In 2016, we saw growing interest in NLP, Computer Vision and Deep Learning in The Fifth Elephant community. Consequently, we launched the Deep Learning conference that year and learned that the most pressing need was understanding what products to build using AI. For once, technology was ahead of its time. The challenge we faced was deciding which business and domain problems to solve with AI.
Between 2016 and 2018, participants and speakers shared work from the medicine, e-commerce and advertising domains to navigate this challenge, and to explain:
- How NLP and Computer Vision techniques were applied to different use cases.
- Research concepts applied in practice: knowledge graphs, uncertainty, explainability (among others).
Watch videos from previous editions at https://hasgeek.tv/anthillinside to learn more about Anthill Inside.
About the 2019 edition of Anthill Inside:
At the 2018 edition of Anthill Inside, the conversation that got the most traction and interest was about the hubs and spokes of AI. At a Birds of a Feather (BOF) session on the hubs and spokes of AI, we discussed:
- Data acquisition and health of data; incorporating label uncertainty while training models.
- Retraining models; detecting and handling concept drifts in the data.
- DevOps and best practices for model change management; monitoring model health.
- Human intervention factors, including incorporating learnings from the feedback loop; pointers for moving in the direction of auto-pilot mode.
The 2019 edition of Anthill Inside builds on this conversation. We will examine the following questions at this year’s edition.
Themes and topics for submitting proposals to speak at the 2019 edition:
Share experience stories and case studies of data acquisition
1.1 What are the sources from which you acquire training data? For example, how did you solve the cold start problem in your domain?
1.2 How did you manage acquisition of life-cycle data: did you acquire the data internally and label it; did you acquire the data from external sources and label it internally; or did you acquire externally labelled data? What were the challenges with storing data in each case?
Data storage case studies: we are specifically interested in hearing about:
2.1 What do you do with data that has lost its currency?
2.2 How do you deal with privacy issues for vast amounts of irrelevant data?
Tools for AI, ML and Deep Learning. Here, we want to hear about:
3.1 Do you use third-party tools? If yes, why?
3.2 Do you adopt and retrofit existing tools? Again, why? Give us a detailed case study.
3.3 Do you develop tools for AI/ML/DL in-house? Why are in-house tools necessary for your case?
Storing data on the cloud and cloud strategy.
4.1 Do you have a multi-cloud strategy? Share this with the community.
4.2 How do you deal with lock-in situations with single providers?
GPU versus CPU – when do you use either, and why? How do the strengths and limitations of each play out for your use case?
Cost of developing ML models: is there a quantifiable way of doing this?
CapEx and OpEx for AI – have you worked on this? Share your insights with the community.
Anthill Inside 2019 is a single-track conference. Birds of a Feather (BOF) sessions, round table discussions and office hours with speakers will be held in parallel with talks in the main auditorium.
Date: 23 November 2019
Venue: NIMHANS Convention Centre, Bangalore
Tutorials and workshops:
We will host a variety of workshops and tutorials, before and after the conference, as well as in other cities (Delhi and Hyderabad, besides Bangalore) between June and December 2019. If you want to teach a tutorial or a hands-on workshop on the following topics, submit a proposal here:
- Knowledge graphs
- Bayesian networks
- Statistical techniques for ML modelling
- Computer Vision
- Natural Language Processing
You can suggest topics for someone to teach as tutorials and workshops, or instructors from whom you’d like to learn. Suggestions and submissions have to be made here.
Submit a proposal to speak here. If you have questions, write to us at email@example.com or call us on 7676332020.
Attention-based sequence-to-sequence models for natural language processing
- Introduction to sequence models
- Why sequence-to-sequence models? Build a sequence-to-sequence model on sample data.
- What is attention? Enhance the model and understand the value of attention.
- Transformer architecture: sequence-to-sequence modelling using self-attention.
- Build a transformer model on sample data.
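To give a flavour of the attention mechanism this tutorial covers, here is a minimal sketch of scaled dot-product attention, the core operation inside the Transformer architecture. This is an illustrative NumPy toy, not tutorial material: the function names and the array sizes below are our own assumptions for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row-wise max before exponentiating, for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, as in the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)    # similarity of each query to each key
    weights = softmax(scores, axis=-1) # each query's attention weights sum to 1
    return weights @ V, weights

# Toy example (sizes are arbitrary): 2 query positions attending over
# 3 key/value positions, with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)             # one output vector per query position
print(weights.sum(axis=-1))  # each row of weights sums to 1
```

Each output row is a weighted average of the value vectors, with weights determined by how well the query matches each key – the "value of attention" the workshop outline refers to.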