## About the 2019 edition
The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule
The conference has three tracks:
- Talks in the main conference hall track
- Poster sessions featuring novel ideas and projects in the poster session track
- Birds of a Feather (BOF) sessions for practitioners who want to use the Anthill Inside forum to discuss:
  - Myths and realities of labelling datasets for Deep Learning.
  - Practical experience with using Knowledge Graphs for different use cases.
  - Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
  - Pros and cons of using custom and open source tooling for AI/DL/ML.
## Who should attend Anthill Inside
Anthill Inside is a platform for:
- Data scientists
- AI, DL and ML engineers
- Cloud providers
- Companies which make tooling for AI, ML and Deep Learning
- Companies working with NLP and Computer Vision who want to share their work and learnings with the community
For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to firstname.lastname@example.org
Sponsorship slots for Anthill Inside 2019 are open. Click here to view the sponsorship deck.
## The shape of U
Tensors are the fundamental data structure for building modern machine learning programs and complex neural architectures. Unfortunately, the foundations of popular tensor libraries (numpy, tensorflow, pytorch) are hardly robust: tensor broadcasting rules, for example, are ad hoc and can cause surprising bugs. Further, tensor library APIs expose low-level memory models to developers, forcing them to continuously translate between their high-level mental models of the data and the low-level memory layout. Moreover, the absence of systematic ways to track shapes and perform ‘semantic’ transformations forces developers to perpetually guess latent tensor shapes or sprinkle ad hoc ‘shape comments’ through their code.
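As a small illustration of the kind of surprise the abstract alludes to, numpy's broadcasting rules can silently turn an intended elementwise operation into an outer product when shapes differ in a compatible but unintended way:

```python
import numpy as np

# `col` is a (3, 1) column vector; `vec` is a plain (3,) vector.
col = np.arange(3).reshape(3, 1)
vec = np.arange(3)

# The programmer may intend an elementwise subtraction, but broadcasting
# expands both operands to (3, 3) -- no error is raised.
result = col - vec
print(result.shape)  # (3, 3), not (3,) or (3, 1)
```

Because nothing fails at this point, the wrong shape often surfaces only much later, far from the line that caused it.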
In short, we believe that developers trying to write deep learning programs/architectures from scratch or even trying to tweak existing model repositories and pre-trained models, are exposed to endless, unwanted ‘tensor’ misery.
In this talk, we will showcase our efforts at OffNote Labs to improve the developer experience when programming with tensors. In particular, we will discuss:
- The idea of naming dimensions of tensors and how named shapes can make tensor programming dramatically less painful.
- The tsalib library, which allows using named dimensions in Python 3.x programs with multiple backend libraries (numpy, tensorflow, pytorch, …).
- The tsanley library, which builds on tsalib, and helps catch tricky tensor shape errors at runtime and annotate existing programs with named shapes.
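To give a flavour of the named-shape idea, here is a minimal sketch with a hypothetical `check_shape` helper (this is an illustration of the concept only, not tsalib's or tsanley's actual API):

```python
import numpy as np

def check_shape(tensor, named_shape, dims):
    """Assert that `tensor` has the shape named by `named_shape`,
    where `dims` maps each dimension name to its expected size.
    Hypothetical helper illustrating named shapes; not tsalib's API."""
    expected = tuple(dims[name] for name in named_shape)
    assert tensor.shape == expected, (
        f"expected {named_shape} = {expected}, got {tensor.shape}")
    return tensor

# Name the dimensions once: batch, channels, height, width.
dims = {'b': 4, 'c': 3, 'h': 8, 'w': 8}

x = np.zeros((4, 3, 8, 8))
check_shape(x, ('b', 'c', 'h', 'w'), dims)

# A 'semantic' transformation: flatten height and width into one
# dimension, with the result's shape documented and checked by name.
dims['hw'] = dims['h'] * dims['w']
y = x.reshape(dims['b'], dims['c'], dims['hw'])
check_shape(y, ('b', 'c', 'hw'), dims)
```

With names attached to dimensions, shape expectations become executable documentation rather than comments that drift out of date.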
Nishant Sinha is an independent researcher and consultant at OffNote Labs, with broad experience in building deep learning systems (across text, vision and speech domains) and symbolic reasoning systems. Nishant helps companies understand and maneuver through the evolving deep learning/AI space and build IP, in-house teams and solutions that enable market leadership. He is also passionate about making cutting-edge research consumable and building tools that improve developer experience.
He received his Ph.D. from Carnegie Mellon University and B. Tech. in Computer Science from IIT Kharagpur.
- Repository links:
  - tsalib: https://github.com/ofnote/tsalib
  - tsanley: https://github.com/ofnote/tsanley