Anthill Inside 2019

A conference on AI and Deep Learning


Submissions are closed for this project

Taj M G Road, Bangalore

About the 2019 edition:

The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule

The conference has three tracks:

  1. Talks in the main conference hall
  2. Poster sessions featuring novel ideas and projects
  3. Birds of a Feather (BOF) sessions for practitioners who want to use the Anthill Inside forum to discuss:
    - Myths and realities of labelling datasets for Deep Learning.
    - Practical experience with using Knowledge Graphs for different use cases.
    - Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
    - Pros and cons of using custom and open source tooling for AI/DL/ML.

Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies which make tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision who want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com


Sponsors:

Sponsorship slots for Anthill Inside 2019 are open.


Anthill Inside 2019 sponsors:


Bronze Sponsors: iMerit, Impetus

Community Sponsors: GO-JEK, iPropal, LightSpeed, Semantics3, Google, Tact.AI, Amex

Hosted by

Anthill Inside is a forum for conversations about Artificial Intelligence and Deep Learning, including tools, techniques, and approaches for integrating AI and Deep Learning in products and businesses, and engineering for AI.

Prakhar Srivastava

PySpark for GeoSpatial Data

Submitted Aug 20, 2019

GeoSpatial data is a key external data source, but it is often too large to process efficiently. This is where PySpark comes in: it cuts computation time and makes the whole pipeline more than 5 times faster. This workshop tackles the problem of calculating the green cover of a region (Delhi) using satellite images and Python.

Outline

Part 1
The session will start with an introduction to the external data sources, viz. satellite images (Landsat-8, Sentinel-2), the shapefile corpus, and the basics of geospatial data.

Libraries used: Shapely, Geopandas, Rasterio
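To give a flavour of the Shapely basics covered in this part, here is a minimal sketch: representing a region of interest as a polygon and doing a point-in-polygon test. The coordinates below are rough, illustrative lon/lat bounds for Delhi chosen for this example, not the actual shapefile boundaries used in the workshop.

```python
# Represent a region as a Shapely polygon and test point membership.
from shapely.geometry import Point, Polygon

# Rough rectangular bounding box around Delhi in (lon, lat) degrees
# (an assumption for illustration, not an exact boundary).
delhi_bbox = Polygon([
    (76.84, 28.40),  # south-west corner
    (77.35, 28.40),  # south-east corner
    (77.35, 28.88),  # north-east corner
    (76.84, 28.88),  # north-west corner
])

# Point-in-polygon: is Connaught Place (~77.22 E, 28.63 N) inside the box?
connaught_place = Point(77.22, 28.63)
print(delhi_bbox.contains(connaught_place))  # True
print(round(delhi_bbox.area, 4))             # area in squared degrees
```

In the workshop itself, the polygon would come from a shapefile read with Geopandas, and the raster pixels inside it from Rasterio.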

Part 2
The session then moves to an essential use case of geospatial data: finding the green cover of a given region. The use case focuses on Delhi and how the region's vegetation has changed over a span of 5 years. This requires an introduction to the concept of green cover and how it can be calculated from satellite imagery. I'll also show how to parallelize this process using Python libraries.

Libraries used: Numba, Numpy, Multiprocessing, Multithreading
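The core of the green-cover calculation can be sketched in plain NumPy using NDVI (Normalised Difference Vegetation Index). For Landsat-8, red is band 4 and near-infrared is band 5; the 0.3 vegetation threshold below is a common rule of thumb, not a figure taken from the talk.

```python
import numpy as np

def green_cover_fraction(red: np.ndarray, nir: np.ndarray,
                         threshold: float = 0.3) -> float:
    """Fraction of pixels whose NDVI = (NIR - Red) / (NIR + Red)
    exceeds `threshold`."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on nodata pixels (both bands 0).
    safe = np.where(denom == 0, 1.0, denom)
    ndvi = np.where(denom == 0, 0.0, (nir - red) / safe)
    return float(np.mean(ndvi > threshold))

# Synthetic 2x2 tile: two vegetated pixels (high NIR, low red),
# two bare ones (NIR and red nearly equal).
red = np.array([[10.0, 10.0], [100.0, 100.0]])
nir = np.array([[90.0, 90.0], [110.0, 110.0]])
print(green_cover_fraction(red, nir))  # 0.5
```

On full Landsat scenes this per-pixel arithmetic is exactly the kind of loop that Numba and multiprocessing can speed up, which is what this part of the session demonstrates.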

Part 3
This part of the session takes the code built so far and modifies it so that PySpark can run the existing Python code at a much higher speed.
It explains the idea behind Spark, why it works so well, and how to hack your way to getting geospatial data working with PySpark.

Libraries used: PySpark
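The PySpark step might look something like the sketch below: split the scene into tiles, compute each tile's green fraction independently on the executors, and average the results. The tile layout and function names are illustrative assumptions, not the workshop's actual code; only the per-tile helper is plain Python, which is what lets Spark ship it to workers unchanged.

```python
import numpy as np

def tile_green_fraction(bands) -> float:
    """Green-cover fraction for one (red, nir) tile pair.
    A pure function, so Spark can serialize and distribute it."""
    red, nir = (b.astype(np.float64) for b in bands)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)
    ndvi = np.where(denom == 0, 0.0, (nir - red) / safe)
    return float(np.mean(ndvi > 0.3))

def main():
    # Requires a Spark installation; call main() explicitly to run.
    from pyspark import SparkContext
    sc = SparkContext(appName="green-cover")

    # Synthetic tiles standing in for chunks of a Landsat scene:
    # one vegetated (NIR >> red), one bare (NIR ~ red).
    tiles = [
        (np.full((64, 64), 10.0), np.full((64, 64), 90.0)),
        (np.full((64, 64), 100.0), np.full((64, 64), 110.0)),
    ]
    fractions = sc.parallelize(tiles).map(tile_green_fraction).collect()
    print(sum(fractions) / len(fractions))
    sc.stop()
```

The design point is that the expensive per-tile work stays in NumPy while Spark only schedules and collects, which is the "hack" that makes geospatial rasters fit PySpark's model.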

Data Sources: Nasa (LandSat - 8), ESA (Sentinel - 2)

Requirements

The workshop is self-contained, with all the code and dependencies ready to execute.

Speaker bio

Greetings to everyone,

I’m Prakhar Srivastava, a researcher and open-source enthusiast. I have worked on deep learning models for almost 4 years and mentor the deeplearning.ai course on Coursera. I have done research at IIIT Delhi, one of India’s leading research institutes (http://midas.iiitd.edu.in/team/prakhar.html). I worked as a student developer in GSoC ‘18 under OpenAstronomy and as a team leader at the Stanford Scholar Initiative. I have hosted complete lecture sessions on deep learning at my college under IEEE. I’m currently a Machine Learning Engineer at Atlan (www.atlan.com), a leading data democratization startup, where we work with humans of data around the world to help them do their lives’ best work.


lavanya TS

Building Products with ML: A Workshop for Product & Engg Managers

Machine learning (ML) has seen substantial adoption, and a large number of data science teams are being created. Taking on ML projects requires product managers and engineers to learn an ML approach to problem solving, to be able to effectively work with data scientists and data engineers. There exists a huge gap in understanding - both cultural & technical - in most organizations since product o…

20 Aug 2019