Optimizing costs of cloud infrastructures

Practical case studies from enterprises and startups

Make a submission

Accepting submissions till 31 Jan 2022, 11:00 PM

The evolution of the cloud over the last decade has radically simplified software deployment. Major providers such as AWS, Azure, GCP, and Alibaba Cloud have lowered the entry barriers for developers, whether launching a new product or scaling to massive size. However, no software company can keep scaling without examining its infrastructure cost as a percentage of revenue.

Many companies today are struggling with cloud costs eating up a significant portion of their gross margins. In this programme, we will discuss the following:

  1. Mental Models for understanding cloud costs according to the stages of a company’s growth:
    a. Attribution
    b. Governance
    c. Baselining
    d. Leakage
  2. Ways to analyze spend, extract insights, and spot anomalies.
  3. Infrastructure planning, forecasting:
    a. Reservation Cycles.
    b. Optimization choices based on the nature of the workloads.
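To make anomaly spotting (item 2 above) concrete, here is a minimal sketch. All spend figures and tag names are hypothetical; real data would come from a billing export such as AWS Cost Explorer. It flags days whose spend deviates sharply from the preceding history of that cost-allocation tag:

```python
from statistics import mean, stdev

# Hypothetical daily spend (USD) per cost-allocation tag.
daily_spend = {
    "team:payments": [120, 118, 125, 122, 119, 121, 310],  # last day spikes
    "team:search":   [80, 82, 79, 81, 80, 83, 82],
}

def spot_anomalies(series, threshold=3.0):
    """Return indices of days whose spend deviates more than
    `threshold` standard deviations from the mean of prior days."""
    anomalies = []
    for i in range(3, len(series)):        # need a few days of history
        history = series[:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

for tag, series in daily_spend.items():
    days = spot_anomalies(series)
    if days:
        print(f"{tag}: anomalous spend on day(s) {days}")
```

A production version would use a longer window and a more robust statistic (e.g. median absolute deviation), but the shape of the analysis is the same: baseline per tag, then alert on deviation.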

After the conference you will walk away with:
1. Tips and techniques to reduce infrastructure costs on clouds such as AWS.
2. A process and governance model for running cost-optimized infrastructure sustainably.
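One small but common piece of such a governance model is enforcing cost-allocation tags, since untagged resources cannot be attributed to any team. The sketch below is illustrative: the required tag set and resource records are hypothetical, and real inventories would come from the cloud provider's API:

```python
# Hypothetical governance check: every resource must carry the
# cost-allocation tags needed for attribution.
REQUIRED_TAGS = {"team", "env", "service"}

def untagged_resources(resources):
    """Return (resource_id, missing_tags) for non-compliant resources."""
    violations = []
    for r in resources:
        missing = REQUIRED_TAGS - set(r.get("tags", {}))
        if missing:
            violations.append((r["id"], sorted(missing)))
    return violations

inventory = [
    {"id": "i-0abc", "tags": {"team": "payments", "env": "prod", "service": "api"}},
    {"id": "i-0def", "tags": {"team": "search"}},
]
print(untagged_resources(inventory))  # → [('i-0def', ['env', 'service'])]
```

Run periodically (or as a policy check at provision time), a report like this is what makes the attribution and governance stages of the mental model above actionable.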

Who should participate:
1. Engineering Leaders (Directors, VP Engineering, Architects)
2. SRE and Infrastructure Managers and Architects.
3. Developers running infrastructure ranging from tens to thousands of servers.
4. Anyone interested in reducing their cloud spend.

If you want to share your case study, share details at https://hasgeek.com/rootconf/optimizing-costs-of-cloud-infrastructure/comments

Contact information: Join the Rootconf Telegram group at https://t.me/rootconf or follow @rootconf on Twitter.
For inquiries, contact Rootconf at rootconf.editorial@hasgeek.com or call 7676332020.

Hosted by

Rootconf is a forum for discussions about DevOps, infrastructure management, IT operations, systems engineering, SRE, and security (from an infrastructure defence perspective).
Prakhar Verma

@prakharverma

Optimizing Cost of Data Platform Workloads

Submitted Jan 31, 2022

Today, data platform workloads constitute a major portion of cloud spend. With every company increasingly making data-driven decisions, this share of cost can quickly get out of hand if it is not governed and optimized effectively.

At Capillary, we have been building data-driven products for the last 12 years. Over the years, our data platform has evolved through several big data systems into a domain-centric, multi-tenant data lake powered by Spark running on EMR and Databricks. The data lake is deeply embedded in our Engagement platform, Loyalty platform, and Insights and AI/ML products.

This talk will focus on how we govern data platform costs and keep them in check as adoption grows and more data-related features are required.

Key takeaways

Participants will learn:

1. Correlating data platform metrics and cloud cost metrics to derive insights
2. Tuning data engineering pipelines to reduce wastage (query optimizations)
3. Fleet design for ETL pipelines with cost considerations (instance selection, on-demand/spot management)
4. Architectural patterns for interactive workloads (reports/dashboards)
5. Cost governance around ad hoc analytics (notebooks)

The primary focus of the talk will be on Apache Spark-based systems.
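The on-demand/spot trade-off mentioned in the takeaways can be sketched with a back-of-the-envelope model. The prices and interruption overhead below are assumptions for illustration only; real spot prices vary by region and over time:

```python
# Illustrative fleet-cost comparison for an ETL pipeline.
ON_DEMAND_PRICE = 0.40   # USD/hour per node (assumed)
SPOT_PRICE = 0.12        # USD/hour per node (assumed)

def fleet_cost(nodes, hours, spot_fraction, spot_interruption_overhead=0.1):
    """Estimate the cost of a fleet mixing on-demand and spot nodes.

    `spot_interruption_overhead` crudely models extra runtime from
    re-running tasks lost to spot reclamation (a simplifying assumption).
    """
    spot_nodes = int(nodes * spot_fraction)
    od_nodes = nodes - spot_nodes
    effective_hours = hours * (1 + spot_interruption_overhead * spot_fraction)
    return (od_nodes * ON_DEMAND_PRICE + spot_nodes * SPOT_PRICE) * effective_hours

all_od = fleet_cost(nodes=20, hours=4, spot_fraction=0.0)
mixed = fleet_cost(nodes=20, hours=4, spot_fraction=0.7)
print(f"all on-demand: ${all_od:.2f}, 70% spot: ${mixed:.2f}")
```

Even with a runtime penalty for interruptions, a majority-spot fleet is usually far cheaper for retryable batch ETL; latency-sensitive interactive workloads lean back toward on-demand or reserved capacity.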

Prakhar Verma is Principal Architect at Capillary Technologies, with over 12 years of experience building data-driven products.

