Thu, Dec 4, 2025 · 09:00 AM – 05:15 PM IST
Submitted Nov 24, 2025
AI has augmented and changed how analytics teams ingest data, generate insights, and build decision systems. But as AI-driven analytics becomes more agentic, i.e. translating natural language to SQL, accessing datasets, generating dashboards, and triggering workflows to achieve anything from simple to complex outcomes, the risk surface grows and demands explicit controls.
Poor guardrails can lead to:
a) Unauthorized data exposure
b) Cost explosions from unbounded queries
c) Silent model drift that skews business decisions or introduces bias
d) Incorrect insights generated with high confidence
e) Compliance violations during AI-initiated data operations
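As one illustration of the guardrails the session will discuss, the query-cost and data-exposure risks above are often mitigated by validating LLM-generated SQL before execution. The sketch below is a minimal, hypothetical example (the function and limits are illustrative, not from the session): it blocks write/DDL statements and forces a row limit on unbounded queries.

```python
import re

# Illustrative pre-execution guardrail for LLM-generated SQL.
# Statement keywords and the row cap are assumptions for this sketch.
BLOCKED = re.compile(r"\b(insert|update|delete|drop|alter|grant|truncate)\b",
                     re.IGNORECASE)
MAX_ROWS = 10_000  # cap result size to avoid unbounded-query cost blowups


def guard_sql(query: str) -> str:
    """Reject non-read statements and force a row limit before execution."""
    if BLOCKED.search(query):
        raise ValueError("Only read-only queries are allowed")
    # Append a LIMIT clause if the query does not already bound its output.
    if not re.search(r"\blimit\b", query, re.IGNORECASE):
        query = f"{query.rstrip().rstrip(';')} LIMIT {MAX_ROWS}"
    return query
```

In production such checks usually run alongside role-based access controls and query-cost estimation, rather than as the sole safeguard.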
This BoF session brings practitioners together to answer one question:
How do we make AI-driven analytics safe without breaking governance, trust, or compliance?
This is not a talk or a lecture. It is real-world sharing: patterns, failures, and guardrails currently used in production analytics environments in enterprises.
Data Platform / Analytics Engineering Teams (hands-on with warehouses, catalogs, and BI systems)
Engineers or Teams building co-pilot systems (LLM-based workflows, agents, semantic layers, and multi-step/hybrid systems)
Compliance and Risk Teams (working with regulatory requirements and processes for handling regulated or sensitive data)
Ravi Balgi is an Architect at Datanimbus with 15 years of experience designing and building large data-driven applications for enterprises across multiple domains. The last 8 years have been spent building large real-time data applications and migrating from legacy systems.