Rootconf is HasGeek’s annual conference – and now a growing community – around DevOps, systems engineering, DevSecOps, security and cloud. The annual Rootconf conference takes place in May each year, with the exception of 2019 when the conference will be held in June.
Besides the annual conference, we also run meetups, one-off public lectures, debates and open houses on DevOps, systems engineering, distributed systems, legacy infrastructure, and topics related to Rootconf.
This is the place to submit proposals for your work, and get them peer reviewed by practitioners from the community.
Topics for submission:
We seek proposals – for short and long talks, as well as workshops and tutorials – on the following topics:
- Case studies of the shift from batch processing to stream processing
- Real-life examples of service discovery
- Case studies on the move from monolith to service-oriented architecture
- Network security
- Monitoring, logging and alerting – running small-scale and large-scale systems
- Cloud architecture – implementations and lessons learned
- Optimizing infrastructure
- Immutable infrastructure
- Aligning people and teams with infrastructure at scale
- Security for infrastructure
If you have questions or queries, write to us at email@example.com
End-to-End DevOps - Design the Infrastructure and Deployment Pipeline for your application
In most teams, you don’t get the chance to work on infrastructure, application development, and build-and-deployment pipeline design all at once; usually, you get your hands dirty in only one of these areas. In this workshop, we are going to dive into all of these aspects of software delivery from the perspective of a DevOps engineer, i.e. everything except developing the app itself, starting right from instantiating the AWS EC2 servers and going all the way to deploying the application.
We’ll start by provisioning EC2 instances and setting up Jenkins, Nexus3 Repository Manager and a SonarQube server in a dockerized environment running behind an NGINX proxy, create a private Docker repository in Nexus3, and understand each step that we take.
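As a rough illustration of what such a dockerized setup can look like, here is a minimal docker-compose sketch. The image tags, port mappings and volume names are assumptions for illustration, not the workshop's exact configuration; the NGINX proxy in front of these services is assumed to run separately on the host.

```yaml
# Sketch only: images, ports and volumes are illustrative assumptions.
version: "3"
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"              # Jenkins web UI, proxied by NGINX
    volumes:
      - jenkins_home:/var/jenkins_home
  nexus:
    image: sonatype/nexus3
    ports:
      - "8081:8081"              # Nexus web UI
      - "8082:8082"              # Docker registry connector for the private repository
    volumes:
      - nexus_data:/nexus-data
  sonarqube:
    image: sonarqube:community
    ports:
      - "9000:9000"              # SonarQube server
volumes:
  jenkins_home:
  nexus_data:
```

NGINX would then reverse-proxy each hostname or path to the matching port, so all three tools sit behind a single entry point.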
Moving on, we’ll configure a Jenkins pipeline in Groovy for a sample application on GitHub. The pipeline will have the following stages:
- Pull code from GitHub (through automated triggers).
- Run SonarQube analysis on the code and post the results to the SonarQube server.
- Run a multi-stage Docker build for the application and push the image to the private Docker repository on Nexus3.
- Deploy the Docker image on the application server.
- Send Slack notifications on build success or failure.
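The stages above can be sketched as a declarative Jenkinsfile. This is only an outline under stated assumptions: the registry host, SonarQube server name, deploy target and credential setup are placeholders, not the workshop's actual values, and the `githubPush`, `withSonarQubeEnv` and `slackSend` steps require the corresponding Jenkins plugins to be installed.

```groovy
// Sketch of the workshop pipeline. Hostnames, server names and the
// deploy target below are placeholders, not real workshop values.
pipeline {
    agent any
    triggers { githubPush() }                    // automated trigger on a GitHub push
    environment {
        REGISTRY = 'nexus.example.com:8082'      // private Docker registry on Nexus3 (placeholder)
        IMAGE    = "${REGISTRY}/sample-app:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }               // pull code from GitHub
        }
        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('sonarqube') {  // server name as configured in Jenkins (placeholder)
                    sh 'sonar-scanner'
                }
            }
        }
        stage('Build & Push') {
            steps {
                sh "docker build -t ${IMAGE} ."  // multi-stage Dockerfile build
                sh "docker push ${IMAGE}"        // push to the private Nexus3 registry
            }
        }
        stage('Deploy') {
            steps {
                sh "ssh deploy@app-server 'docker pull ${IMAGE} && docker run -d ${IMAGE}'"
            }
        }
    }
    post {
        success { slackSend color: 'good',   message: "Build ${env.BUILD_NUMBER} succeeded" }
        failure { slackSend color: 'danger', message: "Build ${env.BUILD_NUMBER} failed" }
    }
}
```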
DevOps cycle to be covered:
1. Set up two EC2 servers with only NGINX and Docker installed, nothing else.
2. Set up dockerized environments with Jenkins, SonarQube, Portainer and Sonatype Nexus Repository Manager.
3. Set up Jenkins with all the necessary plugins.
4. Set up Nexus with a private Docker repository.
5. Design a Jenkins pipeline in Groovy to build and deploy a Vue.js application.
Tools to be covered:
AWS EC2, NGINX, Docker, Jenkins, Sonatype Nexus3 Repository Manager, docker-compose.
After the workshop, attendees will be able to:
- Design the DevOps server architecture.
- Set up Dockerized, production-level CI infrastructure.
- Develop Jenkins pipelines for dockerized CI/CD of applications.
A Linux laptop is enough to do the hands-on exercises and understand all the concepts. An AWS account ready to provision two t2.small instances will be an added benefit.
In my current role as Associate Architect - DevOps & SRE at Perennial Systems, I set up DevOps practices from scratch and established, and now lead, the DevOps and Site Reliability Engineering division, working with Thailand- and Singapore-based clients. I am also involved in various pre-sales activities and in designing compliant, fault-tolerant, highly available architectures for cloud-based solutions.
In my previous organization, as a DevOps engineer, I was among those who set up DevOps practices from scratch. Along with being responsible for build and release automation and setting up CI/CD pipelines for all applications, I managed the AWS infrastructure and monitored multiple production servers. I was also responsible for researching new technologies and implementing them in our applications after a thorough POC.