Selected Talks for PyConf Hyderabad 2017 have been announced. Please check the Confirmed Proposals section for the selected talks.
Guidelines for Proposal Submission
Following are the guidelines for proposal submission
- Please mention the type of Proposal, as given below, in the Title of the Proposal.
- The proposal should state an objective with clear expectations for the audience.
- The Proposal description should be short and to the point.
- The proposal should list prerequisites such as environment setup and library versions.
- No proposal will be selected without a link to appropriate session content such as a presentation, PDF, or code snippets.
- Proposal content should adhere to the code of conduct.
- Proposal content links can be updated later.
- Proposal content shouldn’t carry a company name throughout. Mention of the employer is allowed only at the beginning of the content (presentation/PDF).
- Background image/wallpaper shouldn’t contain company name/logos.
- For any questions, please write to firstname.lastname@example.org.
We have three kinds of Proposals - General Talks, Lightning Talks and Workshops. Please mention the Proposal type in the Title of the Proposal, formatted as Proposal Type : Proposal Title.
General Talks
These are the traditional talk sessions. They will be conducted on Day 2 of the Conference, Sunday, 8th Oct. Each talk is 45 minutes long.
Lightning Talks
These are short talks that will be conducted on Day 2 of the Conference, Sunday, 8th Oct. The time limit is 5 minutes, but we may extend it depending on the number of talks submitted.
Workshops
As with the talks, we are looking for Workshops that can grow this community at any level. We aim for Workshops that will advance Python, advance this community, and shape the future. Each session runs for 6 full hours plus a break for lunch. Two workshops will run in parallel on Day 1 of the Conference, Saturday, 7th Oct, in the same venue that hosts the main conference. Workshop I is aimed at Beginners, while Workshop II is an Advanced Session aimed at Professionals.
Themes and Topics
The themes and topics are:
- Core Python and Python 3 features
- Concurrent and Asynchronous programming in Python
- Data Science and Analysis
- Web Development
- Python and IoT
- Functional Programming
- Artificial Intelligence
- Continuous integration and Deployment
- Scientific Computing
- Cloud computing with Python
Important Dates
- 31st August, 2017 : Deadline for Proposal Submission
- 16th September, 2017 : Talk selection and announcement
Workshop : Big Data Carpentry with Python
Automation is not a replacement for manual labour, but an aid to it. We at Sosio have internally cut our data processing time by 50% by automating monotonous, simple tasks. While automating mundane tasks speeds up processes, saving us time and energy, automation is not always an easy answer. It is often complex, requires human intervention, and even when set up successfully needs constant monitoring and review.
There are myriad data pipelining frameworks and libraries available for every use case imaginable. The complexity of handling such diversity in tooling, and uniqueness of the problem statement leads to duplicated efforts and reinvention of the wheel.
The session will primarily give the audience an understanding of pipeline frameworks, workflow automation, and the relevant Pythonic toolsets that support them. We will go through some common design patterns, trade-offs, and available libraries/frameworks for designing such systems, focusing on the reusability, consistency, availability, idempotency, and scalability of these systems.
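As a taste of one of those properties, idempotency can be shown as a minimal, framework-free sketch: a task checks whether its output already exists before redoing work, the pattern Luigi formalises with targets. All names here (`OUTPUT`, `run_task`) are illustrative, not from any particular library.

```python
import os
import tempfile

# Idempotent task sketch: re-running produces the same result and
# skips work whose output already exists. The output file is the
# in-process stand-in for a Luigi-style target.
OUTPUT = os.path.join(tempfile.gettempdir(), "pipeline_demo_output.txt")

def task_done():
    return os.path.exists(OUTPUT)

def run_task(runs):
    if task_done():            # output present -> nothing to do
        return runs
    with open(OUTPUT, "w") as f:
        f.write("processed\n")
    return runs + 1

if os.path.exists(OUTPUT):     # start from a clean slate
    os.remove(OUTPUT)

runs = run_task(0)    # does the work
runs = run_task(runs) # no-op on the second run
print(runs)  # 1
```

Re-running a whole pipeline of such tasks only recomputes what is missing, which is what makes retries and partial failures manageable.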
We will take up basic data pipelining concepts as well as practical use cases for using data pipelines with Python. We will cover some of the popular task and data workflow tools like Celery, Luigi, and Airflow, and touch on some overarching concepts in building a data pipeline.
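One of those overarching concepts, pipelines as directed acyclic graphs (DAGs), can be sketched with the standard library alone (`graphlib`, Python 3.9+); the task names below are made up for illustration:

```python
from graphlib import TopologicalSorter

# A toy pipeline DAG: each task names the tasks it depends on,
# much as Luigi's requires() or Airflow's >> operator would.
dag = {
    "extract":  set(),
    "clean":    {"extract"},
    "features": {"clean"},
    "load":     {"features"},
}

# static_order() yields tasks so dependencies come before dependants.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'clean', 'features', 'load']
```

Schedulers like Airflow do essentially this, plus retries, scheduling, and distribution across workers.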
The principles can be applied to archival, warehousing and analytics, and low-latency hot storage data.
We will solve a few example problems during the workshop to make these points concrete. Much of what is presented is based on our experience of trying different libraries and learning lessons the hard way: what did not work, and what made things easy for us.
By the end of the session, one should be comfortable with:
- Assessing if a pipeline framework is right for your dataset.
- Comparing pipeline tools and writing tasks.
- Parallelising and Scaling tasks
- Approaching data pipelining with a Python toolset
Specifically, we will be talking about:
- Understanding a queue, constructs of producer and consumer
- Writing and Deploying tasks using Celery
- Scaling Celery workers and monitoring with Flower
- First Steps with Dask
- Data pipelines and DAGs
- First steps with Luigi and Airflow
- Custom and Advanced Tasks with Luigi and Airflow
- Pipelines and Spark Streaming - listening to the Twitter stream
- Pipelines and Django Channels - pub-sub and data flow
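The queue and producer/consumer constructs in the first bullet can be shown with nothing but the standard library's `queue` module; a broker such as RabbitMQ or Redis plays the same role for Celery. This is a self-contained sketch, not code from any of the tools above:

```python
import queue
import threading

# The queue is an in-process stand-in for a message broker:
# the producer puts work on it, the consumer takes work off it.
q = queue.Queue()
results = []

def producer():
    for i in range(5):
        q.put(i)
    q.put(None)  # sentinel: no more work

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * item)  # "process" the message

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 4, 9, 16]
```

Scaling this pattern out to many consumer processes on many machines, with monitoring and retries, is exactly what Celery and Flower provide.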
Prerequisites
- Intermediate understanding of Python
- Basic understanding of Bash commands
- Basics of deployment and working with remote servers
- Interest in Data and Systems
About the Speaker
Saket is the founder of Sosio, which caters to the large-scale data needs of enterprises and non-profits. He has been semi-active in tech conferences, attending and delivering talks across the globe. In a personal capacity, he has introduced Python to more than 500 individuals and has conducted training sessions at corporate houses like Oracle. In a previous life, he spent a good chunk of his time optimising computational mechanics algorithms.