Unlock sub-second SQL analytics over terabytes of data with Hive and Druid
Submitted by Nishant Bangarwa (@nishantbangarwa) on Tuesday, 6 June 2017
Full talk for data engineering track
Druid is an open-source analytics data store designed for business intelligence (OLAP) queries on timeseries data. Druid provides low-latency real-time data ingestion, flexible data exploration, and fast data aggregation. Many organizations have deployed Druid to analyze ad-tech, dev-ops, network traffic, website traffic, finance, sensor, and IoT data.
Druid’s strong points are very compelling, but it lacks some important features, such as support for large joins and full SQL. This talk will present how Druid and Apache Hive can be used together to index large amounts of data, query Druid data sources from Hive using SQL, and execute complex Hive queries on top of Druid data sources. We will walk through the architecture of the solution, which leverages Apache Calcite to transparently generate Druid JSON queries from the input Hive SQL queries. We conclude with a demo highlighting the performant and powerful integration of these projects.
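To make the integration concrete, the sketch below shows the general shape of the Hive–Druid setup the talk describes: an existing Druid datasource is exposed as an external Hive table via the Druid storage handler, and then queried with ordinary Hive SQL, which Calcite rewrites into native Druid JSON queries where possible. The table, column, and datasource names here are hypothetical, and the exact handler class and properties may vary by Hive version.

```sql
-- Minimal sketch (hypothetical names): expose an existing Druid
-- datasource called "pageviews" as an external Hive table.
CREATE EXTERNAL TABLE druid_pageviews
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.datasource" = "pageviews");

-- Standard Hive SQL over the Druid-backed table; the planner pushes
-- the filter and aggregation down to Druid where it can.
SELECT `page`, SUM(`views`) AS total_views
FROM druid_pageviews
WHERE `__time` BETWEEN '2017-01-01 00:00:00' AND '2017-01-31 23:59:59'
GROUP BY `page`
ORDER BY total_views DESC
LIMIT 10;
```

Queries that Druid cannot answer directly (for example, joins against other Hive tables) still run, with Hive executing the parts Druid does not support.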
Introduction to Hive and Druid
Why Hive + Druid
Nishant is a Druid PMC member and a software engineer at Hortonworks, where he is part of the Business Intelligence team. Prior to that, he was part of the Metamarkets backend team and was responsible for analytics infrastructure, including real-time analytics in Druid. He holds a B.Tech in Computer Science from the National Institute of Technology, Kurukshetra, India.