The Fifth Elephant 2017
On data engineering and application of ML in diverse domains
Jul 2017
27 Thu 08:15 AM – 10:00 PM IST
28 Fri 08:15 AM – 06:25 PM IST
Vimal Sharma
Apache Atlas is the one-stop solution for data governance and metadata management on enterprise Hadoop clusters. Atlas has a scalable and extensible architecture that plugs into many Hadoop components to manage their metadata in a central repository. Vimal Sharma will review the challenges associated with managing large datasets on Hadoop clusters and demonstrate how Atlas solves the problem. He will focus on the cross-component lineage tracking capability of Apache Atlas, and will also discuss the upcoming features and roadmap of the project.
The talk is intended to introduce Apache Atlas and its capabilities to the audience, and to invite potential developers to contribute to the Apache Atlas project.
Apache Atlas Project Introduction
Data Governance challenge and use case scenarios
Atlas architecture
Cross component lineage capability of Atlas
Apache Ranger integration to enforce tag based policies
Atlas TypeSystem
Model a Spark DataFrame as an Atlas type
Demo based on the above model
Future/Roadmap
Invitation to contribute
More details on the order of presentation
Why Apache Atlas (what are the use cases)?
Enterprises have hundreds of ETL pipelines in which developers take source data, apply transformations, and persist the result into the warehouse. If an upstream pipeline breaks or fails, how does the owner of the current dataset narrow down the cause and the culprit ETL pipeline? Conversely, if the current pipeline breaks, the owner has no mechanism to alert the owners of downstream processes. A tool that keeps track of the provenance/lineage/impact of a dataset solves this issue, and Atlas has the capability to track the lineage of datasets.
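As a rough illustration of how this looks programmatically, the sketch below fetches the lineage graph of a dataset over the Atlas v2 REST API. It assumes an Atlas server at localhost:21000 with the default admin/admin credentials and a dataset entity whose GUID is already known; the GUID value here is a placeholder.

    # Sketch: fetch upstream/downstream lineage for a dataset from Apache Atlas.
    # Assumes Atlas at localhost:21000 with default admin/admin credentials and
    # a known entity GUID; adjust for a real deployment.
    import requests

    ATLAS = "http://localhost:21000/api/atlas/v2"
    AUTH = ("admin", "admin")

    guid = "REPLACE-WITH-ENTITY-GUID"  # hypothetical placeholder
    resp = requests.get(f"{ATLAS}/lineage/{guid}",
                        params={"direction": "BOTH", "depth": 3},
                        auth=AUTH)
    resp.raise_for_status()
    lineage = resp.json()

    # "relations" lists fromEntityId -> toEntityId edges; "guidEntityMap"
    # resolves each GUID to an entity header (type, display name, etc.).
    for edge in lineage.get("relations", []):
        print(edge["fromEntityId"], "->", edge["toEntityId"])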
ETL redundancy is another striking issue in current enterprise Hadoop deployments. Many developers process data and persist it to the warehouse, but they have no mechanism to detect whether the result they need has already been computed and resides in an existing dataset. Using the lineage diagram and the classification feature of Atlas, developers can look into the details of derived datasets and skip the expensive processing if the information is already available in one of them.
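A minimal sketch of that check, assuming Atlas basic search is available at the same server and that such derived tables carry a hypothetical "Derived" classification (the classification name and search term are illustrative, not from the talk):

    # Sketch: before re-running an expensive ETL job, search Atlas for an
    # existing derived dataset. Classification name and query term are
    # hypothetical examples.
    import requests

    resp = requests.get("http://localhost:21000/api/atlas/v2/search/basic",
                        params={"typeName": "hive_table",
                                "classification": "Derived",
                                "query": "customer_orders",  # free-text term
                                "limit": 10},
                        auth=("admin", "admin"))
    resp.raise_for_status()
    for e in resp.json().get("entities", []):
        print(e["typeName"], e.get("displayText"))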
Further, enterprises need to adhere to compliance policies that span multiple datasets across components such as Hive, HBase, and HDFS. How can the business make sure that a particular policy is enforced across datasets in all these components? Datasets can be tagged in Atlas, and Apache Ranger can use its tag-based policy feature to enforce the constraints.
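The tagging side of that workflow is a single REST call. The sketch below attaches a classification to an existing entity, assuming a "PII" classification type has already been defined in Atlas and the entity GUID is known; Ranger's tag-based policies would then key off that tag.

    # Sketch: tag a dataset in Atlas so Ranger tag-based policies apply to it.
    # Assumes a "PII" classification type already exists and the entity GUID
    # is known; Atlas at localhost:21000 with default credentials.
    import requests

    guid = "REPLACE-WITH-ENTITY-GUID"  # hypothetical placeholder
    resp = requests.post(
        f"http://localhost:21000/api/atlas/v2/entity/guid/{guid}/classifications",
        json=[{"typeName": "PII"}],
        auth=("admin", "admin"))
    resp.raise_for_status()  # expect 204 No Content on success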
A cluster admin may need to periodically clean up unused or dormant datasets from the warehouse. How can the admin narrow down the candidate datasets for archival? Atlas is useful in determining the relevance of a dataset on the basis of the number of tags and downstream datasets derived from it.
What is Apache Atlas
Apache Atlas is the governance and metadata framework for Hadoop. Atlas has a scalable and extensible architecture that plugs into many Hadoop components to manage their metadata in a central repository. By virtue of its extensible TypeSystem, any arbitrary component (not necessarily a Hadoop component) can be modelled to capture the metadata of its datasets and events. The metadata events can then be classified using tags, which Ranger can in turn use to enforce security policies. When a dataset derives from another dataset, the event can be registered, and Atlas will capture the lineage relationship.
How
Atlas provides built-in support for several Hadoop components such as Hive, Storm, and Sqoop: whenever new datasets and events are created in these components, Atlas captures their metadata automatically. For a new component like Spark, the model of the metadata to be captured first needs to be defined and registered with Atlas. Once the model is in place, datasets and events occurring in that component can be registered with Atlas using its rich REST API.
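A minimal sketch of those two steps, assuming the same local Atlas server: define a "spark_dataframe" entity type (a DataSet subtype) via the typedefs endpoint, then register one instance of it. The type name, attribute, and qualifiedName below are hypothetical; the actual demo may model Spark differently.

    # Sketch: define a minimal "spark_dataframe" type and register an instance.
    # All names are hypothetical illustrations.
    import requests

    ATLAS = "http://localhost:21000/api/atlas/v2"
    AUTH = ("admin", "admin")

    typedefs = {
        "enumDefs": [], "structDefs": [], "classificationDefs": [],
        "entityDefs": [{
            "name": "spark_dataframe",
            "superTypes": ["DataSet"],  # inherits name, qualifiedName, etc.
            "attributeDefs": [{
                "name": "columns", "typeName": "array<string>",
                "isOptional": True, "cardinality": "LIST",
                "isUnique": False, "isIndexable": False,
                "valuesMinCount": 0, "valuesMaxCount": 2147483647,
            }],
        }],
    }
    requests.post(f"{ATLAS}/types/typedefs", json=typedefs, auth=AUTH).raise_for_status()

    entity = {"entity": {
        "typeName": "spark_dataframe",
        "attributes": {
            "qualifiedName": "sales_df@mycluster",  # hypothetical
            "name": "sales_df",
            "columns": ["order_id", "amount"],
        },
    }}
    requests.post(f"{ATLAS}/entity", json=entity, auth=AUTH).raise_for_status()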
The demo for the presentation will cover these parts. First, Spark datasets will be modeled and registered with Atlas. Then, a realistic use case will be considered in which we capture a lineage relationship across components such as HDFS and Kafka. We will then go to the Atlas UI and inspect lineage along with other features such as tag-based classification, search, and advanced search.
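To give a flavour of the cross-component part, the sketch below records lineage by creating a process entity that reads an hdfs_path and writes a kafka_topic. It uses the generic Process supertype for brevity (a real integration would typically define its own process subtype) and assumes both dataset entities already exist in Atlas with the qualifiedNames shown, which are hypothetical.

    # Sketch: record cross-component lineage by creating a Process entity that
    # reads an hdfs_path and writes a kafka_topic. Assumes both dataset
    # entities already exist with the qualifiedNames used below (hypothetical).
    import requests

    ATLAS = "http://localhost:21000/api/atlas/v2"
    AUTH = ("admin", "admin")

    process = {"entity": {
        "typeName": "Process",
        "attributes": {
            "qualifiedName": "etl.events_to_kafka@mycluster",  # hypothetical
            "name": "events_to_kafka",
            "inputs": [{"typeName": "hdfs_path",
                        "uniqueAttributes": {"qualifiedName": "hdfs://nn/data/events@mycluster"}}],
            "outputs": [{"typeName": "kafka_topic",
                         "uniqueAttributes": {"qualifiedName": "events@mycluster"}}],
        },
    }}
    requests.post(f"{ATLAS}/entity", json=process, auth=AUTH).raise_for_status()
    # Atlas now shows HDFS -> Process -> Kafka in the lineage graph of both datasets.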
Vimal Sharma is an Apache Atlas PMC member and committer at Hortonworks. He graduated from IIT Kanpur with a B.Tech in Computer Science. He is highly passionate about the Hadoop stack and has previously worked on scaling backend systems at WalmartLabs using Spark and Kafka.
Vimal was a speaker at ApacheCon BigData 2017.
https://www.slideshare.net/vimalsharma357/fifth-elephant-apache-atlas-talk