
Agam Jain

@agamjain

Building Robust, Reliable Data Pipelines

Submitted Apr 15, 2019

This talk is about sharing our learnings and some of the best practices we have built over years of working with massive volumes of data and an ever-changing schema.
What we are not going to discuss are the specifics of the technology choices we made, how we scaled our system 10x year on year, or how we halved the latency of processing our data.
Zapr has profiled millions of users for TV consumption, and along the way we had to build our data processing pipeline from scratch. Initially, we did it all the wrong way by adding fields over time; as a result, it reached a point where it was impossible to manage or keep track of the fields present in the data. The core concept in this talk is how we should model the data flowing through pipelines, and the advantages this gives from both a business and a technical perspective.
This talk should help anyone new to building data processing pipelines in their organization to be future-proof and wary of the pitfalls of evolving data schemas.
Folks who are already doing this and have built expertise around it will be able to relate, and will get another perspective on how to manage the data flowing through their pipelines.

Outline

The flow will look like this:

  • The need for a message bus when building a data processing pipeline (see the producer sketch after this outline)

  • For the events published on the message bus, the need for a contract for data control (with examples of how we messed up and learnt from it)

    • Explain in more detail what a contract is
    • How it can be implemented (see the schema sketch after this outline)
      • Starts with hierarchical modelling of data and the relations between objects
      • What tools are out there to store these complex relationships between entities
  • Discuss the gains from implementing contract control for any data that flows through the pipeline

    • From a business perspective: improving business logic and joining with other data sets
    • From a technological perspective:
      • Schema extensibility of fields in the data
      • Predictability of development
      • Back-dated processing: backward and forward compatibility (see the compatibility sketch after this outline)
      • The ability to break the pipeline down by responsibility, so teams can work on different components
  • Implementing the above for multi-step data processing (enrichment) (see the enrichment sketch after this outline)
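
To make the first outline point concrete, here is a minimal sketch of publishing events to a message bus. It assumes the kafka-python client, a local broker, and a hypothetical `view-events` topic; since the talk is deliberately technology-agnostic, treat Kafka as a stand-in for any bus.

```python
import json

from kafka import KafkaProducer  # assumes the kafka-python client

# Producers write events to the bus instead of calling consumers directly,
# which decouples upstream services from every stage of the pipeline.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    value_serializer=lambda e: json.dumps(e).encode("utf-8"),
)

# Hypothetical TV-viewership event; the field names are illustrative only.
event = {"user_id": "u123", "channel": "sports-1", "viewed_at": 1555305600000}
producer.send("view-events", event)
producer.flush()
```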
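
A data contract can be as little as an explicit, versioned schema that every producer and consumer validates against. Below is a sketch using Avro via the fastavro library; the `ViewEvent` record and its fields are hypothetical, but the nested `Content` record shows the hierarchical modelling of entities the outline refers to: adding a field now means changing the contract, not silently appending to the data.

```python
import io

import fastavro

# Hypothetical v1 contract for a viewership event. Nested records model the
# relations between entities instead of an ever-growing flat bag of fields.
VIEW_EVENT_V1 = fastavro.parse_schema({
    "type": "record",
    "name": "ViewEvent",
    "namespace": "com.zapr.pipeline",  # illustrative namespace
    "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "viewed_at", "type": "long"},  # epoch millis
        {"name": "content", "type": {
            "type": "record",
            "name": "Content",
            "fields": [
                {"name": "channel", "type": "string"},
                {"name": "programme", "type": ["null", "string"], "default": None},
            ],
        }},
    ],
})

def serialize(event: dict) -> bytes:
    """Serialize an event, failing fast if it violates the contract."""
    buf = io.BytesIO()
    fastavro.schemaless_writer(buf, VIEW_EVENT_V1, event)
    return buf.getvalue()
```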
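
The back-dated processing point can be sketched the same way: under Avro's schema-resolution rules, data written with the v1 contract stays readable under a newer schema as long as every new field carries a default, which is what makes reprocessing old data safe. The v2 schema below is hypothetical and continues the v1 sketch above.

```python
# (Continues the contract sketch above.) New fields are optional with a
# default, so v1 data stays readable (backward compatibility) and v1
# readers can ignore fields they do not know about (forward compatibility).
VIEW_EVENT_V2 = fastavro.parse_schema({
    "type": "record",
    "name": "ViewEvent",
    "namespace": "com.zapr.pipeline",
    "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "viewed_at", "type": "long"},
        {"name": "content", "type": {
            "type": "record",
            "name": "Content",
            "fields": [
                {"name": "channel", "type": "string"},
                {"name": "programme", "type": ["null", "string"], "default": None},
            ],
        }},
        {"name": "region", "type": ["null", "string"], "default": None},  # new in v2
    ],
})

def read_v1_event(payload: bytes) -> dict:
    # Resolve bytes written under v1 against the v2 reader schema;
    # "region" is filled with its default for old records.
    return fastavro.schemaless_reader(io.BytesIO(payload), VIEW_EVENT_V1, VIEW_EVENT_V2)
```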
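
Finally, breaking the pipeline down by responsibility might look like the sketch below: each enrichment stage is its own consumer-producer hop that depends only on the contract, so different teams can own different stages. The geo-lookup stage and its wiring are purely illustrative.

```python
from typing import Dict

def lookup_region(user_id: str) -> str:
    """Stand-in for a hypothetical geo store; a real stage would query a service."""
    return "IN-KA"

def enrich_with_geo(event: Dict) -> Dict:
    # One team owns this hop: consume a contract-valid event, add a field the
    # v2 contract already allows, and publish downstream. No other stage changes.
    return {**event, "region": lookup_region(event["user_id"])}

print(enrich_with_geo({"user_id": "u123", "viewed_at": 1555305600000}))
```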

Additional Advantages

  • Cost-wise
  • Data cleaning
  • Data consistency
  • Linear pipeline

Speaker bio

I work as a Tech Architect at Zapr, working closely with data engineering teams, and more specifically I drive initiatives to improve the quality of our data. In my spare time I like to read about how different organizations are solving new types of problems, listen to podcasts, and watch football.

Slides

https://docs.google.com/presentation/d/1AYRDBXkXUu-lo-4U8meh3YX4DYLZyuXlR2izcVR9T8Y/edit?usp=sharing
