Critical pipe fittings: What every data pipeline requires
Submitted by Yagnik (@yagnik) on Wednesday, 27 May 2015
The talk aims to give data builders the key building blocks they need to create their own frameworks and tools, add transparency to their data pipelines, and ship faster.
Most organizations that leverage data do so on technologies such as Hadoop, Spark, or Vertica. All of these let organizations process data, but nearly always these organizations also maintain their own code bases and frameworks that builders use to clean, process, and query that data. While building Starscream (Shopify's dimensional modelling framework on top of Spark), we learnt various lessons about the numerous building blocks that don't ship with these technologies yet are critical to the smooth functioning and transparency of a data pipeline. The talk will walk the audience through these building blocks, such as metadata and incremental builds, their use cases, and how they helped Shopify ship faster.
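As a rough illustration of the "incremental build" idea the abstract mentions (a minimal sketch with hypothetical names, not Starscream's actual code): a pipeline can record a high-water mark in a small metadata store and, on each run, process only records newer than that mark.

```python
# Hypothetical sketch of an incremental build driven by pipeline metadata.
# METADATA_PATH, load_watermark, and incremental_build are illustrative
# names, not part of Starscream or Spark.
import json
import os

METADATA_PATH = "pipeline_metadata.json"  # assumed location for run metadata


def load_watermark(job_name):
    """Return the last processed timestamp for a job, or 0 on the first run."""
    if not os.path.exists(METADATA_PATH):
        return 0
    with open(METADATA_PATH) as f:
        return json.load(f).get(job_name, 0)


def save_watermark(job_name, watermark):
    """Persist the new high-water mark after a successful build."""
    meta = {}
    if os.path.exists(METADATA_PATH):
        with open(METADATA_PATH) as f:
            meta = json.load(f)
    meta[job_name] = watermark
    with open(METADATA_PATH, "w") as f:
        json.dump(meta, f)


def incremental_build(job_name, records):
    """Process only records newer than the stored high-water mark."""
    watermark = load_watermark(job_name)
    new_records = [r for r in records if r["updated_at"] > watermark]
    for record in new_records:
        # clean / transform / load the record here
        pass
    if new_records:
        save_watermark(job_name, max(r["updated_at"] for r in new_records))
    return new_records


# First run processes everything; a rerun over the same data is a no-op.
rows = [{"id": 1, "updated_at": 100}, {"id": 2, "updated_at": 200}]
print(len(incremental_build("orders", rows)))  # 2
print(len(incremental_build("orders", rows)))  # 0
```

The stored metadata also aids transparency: inspecting the watermark file tells an operator exactly how far each job has progressed without rerunning anything.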
Prerequisites: basic experience with processing data.
Yagnik is a software developer at Shopify.