19 June 2026 | Bengaluru
This is not a traditional AI conference.
We are inverting the industry standard of 80% slides and 20% substance. The focus here is on executable knowledge, operational depth, and systems that are already meeting reality.
The central question driving this event is:
How do you know it’s working?
Every submission — whether a talk, demo, discussion, or case study — should answer this with concrete evidence drawn from:
- evaluation pipelines,
- observability and tracing,
- architecture decisions,
- operational trade-offs,
- failure analysis,
- latency and reliability data,
- or cost benchmarks.
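As one illustration of what "concrete evidence" can look like, a release gate over evaluation scores and tail latency might be as simple as the sketch below. All numbers, names, and thresholds are hypothetical, not a prescribed tool or methodology:

```python
# Minimal sketch: gate a deployment on eval-score regression and
# p95 latency. Thresholds and data below are placeholders.

def p95(values):
    """95th percentile via nearest-rank on a sorted copy."""
    ordered = sorted(values)
    rank = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[rank]

def should_block_release(baseline_scores, candidate_scores,
                         latencies_ms, max_drop=0.02, p95_budget_ms=800):
    """Block the release if the mean eval score drops by more than
    `max_drop`, or if p95 latency exceeds the budget."""
    baseline = sum(baseline_scores) / len(baseline_scores)
    candidate = sum(candidate_scores) / len(candidate_scores)
    return (baseline - candidate) > max_drop or p95(latencies_ms) > p95_budget_ms

# Example: a ~5-point score drop with acceptable latency -> blocked
blocked = should_block_release(
    baseline_scores=[0.91, 0.88, 0.90],
    candidate_scores=[0.84, 0.83, 0.86],
    latencies_ms=[120, 200, 310, 290, 640],
)
```

The point is not the specific check but that the decision is mechanical and reproducible, which is exactly the kind of evidence we want presented.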
If you can show it running in production, show it.
We want submissions from teams building and operating AI systems under real constraints:
- latency,
- scale,
- governance,
- infrastructure cost,
- enterprise integration,
- reliability,
- security,
- and organizational complexity.
This is a conference for practitioners navigating the messy middle between:
“the demo worked”
and
“the system survived production.”
Anchor talks set the intellectual frame for the day by examining the gap between prototype AI systems and production reality.
We are looking for CTOs, principal engineers, architects, and senior practitioners willing to share the real architecture and operational lessons behind deployed systems.
Topics of interest include:
- Workflow orchestration across enterprise systems
- Multi-agent architectures and execution models
- Sovereign or air-gapped AI deployments
- AI infrastructure and inference platforms
- Evaluation systems and deployment safety
- AI observability and tracing
- Governance, auditability, and compliance
- Reliability engineering for agentic systems
- Human-in-the-loop escalation architectures
- Cost-aware AI systems and inference optimization
Tell us:
- what worked,
- what failed,
- what was thrown away,
- what the metrics revealed,
- and the one architectural decision you would change today.
Requirement: No roadmap slides. No future vision decks.
These sessions are the core of the conference.
We are specifically looking for:
live demonstrations of production-grade AI systems.
Not prototypes. Not conceptual walkthroughs.
Show:
- execution traces,
- failure handling,
- evaluation outputs,
- rollback behavior,
- observability tooling,
- orchestration logic,
- cost signals,
- or operational controls.
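To make "failure handling" and "execution traces" concrete: the behavior a demo might surface can be sketched as a retry-with-escalation wrapper that records every attempt. Names and the retry policy here are made up; your orchestration layer will differ:

```python
# Minimal sketch of retry-with-escalation around a flaky tool call,
# recording an execution trace. Names and policy are hypothetical.

def call_with_escalation(tool, args, trace, max_retries=2):
    """Try `tool` up to max_retries+1 times; on exhaustion, mark the
    step for human escalation instead of failing silently."""
    for attempt in range(max_retries + 1):
        try:
            result = tool(*args)
            trace.append({"attempt": attempt, "status": "ok"})
            return result
        except Exception as exc:  # real systems catch narrower errors
            trace.append({"attempt": attempt, "status": "error", "error": str(exc)})
    trace.append({"status": "escalated_to_human"})
    return None

# Example: a tool that fails twice, then succeeds on the third attempt
calls = {"n": 0}
def flaky(x):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("upstream timeout")
    return x * 2

trace = []
result = call_with_escalation(flaky, (21,), trace)
```

A demo that can show this trace, live, for a real workload is exactly what we mean by trace visibility.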
Examples of what we'd like to see:
- Multi-agent systems with retries, escalation, and trace visibility
- Evaluation pipelines catching real regressions in CI/CD
- RAG systems with measurable hallucination reduction
- AI-powered internal tooling and developer workflows
- AI governance and policy enforcement systems
- LLM routing, caching, and inference optimization
- Model evaluation dashboards and feedback loops
- Long-running workflow orchestration
- AI incident debugging and recovery tooling
- Tool use / function-calling reliability at scale
- Multimodal retrieval systems with attribution
- GPU utilization and inference infrastructure tooling
Walk us through:
- failures,
- debugging,
- trade-offs,
- and operational safeguards.
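One of the demo areas above, LLM routing and caching, can be illustrated with a toy sketch. The routing heuristic, model names, and thresholds are invented for illustration only:

```python
# Toy sketch of cost-aware LLM routing with a response cache.
# Model names, thresholds, and the complexity heuristic are made up.

cache = {}

def route(prompt):
    """Send short prompts to a cheap model, longer ones to a larger one."""
    return "small-model" if len(prompt.split()) < 20 else "large-model"

def answer(prompt, backends):
    """Serve from cache when possible; otherwise call the routed backend."""
    if prompt in cache:
        return cache[prompt], "cache"
    model = route(prompt)
    result = backends[model](prompt)
    cache[prompt] = result
    return result, model

# Stand-in backends for the sketch; real ones would call inference APIs
backends = {
    "small-model": lambda p: f"small:{p}",
    "large-model": lambda p: f"large:{p}",
}
first, source1 = answer("hello world", backends)   # routed to small-model
second, source2 = answer("hello world", backends)  # served from cache
```

What makes this demo-worthy in production is the part the sketch omits: cache invalidation, routing quality metrics, and the cost delta it actually produced.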
Lightning talks are short, sharp, and insight-dense.
One orientation slide. Then the thing itself.
Topics include:
- Real production inference costs
- Evaluation methodologies that actually worked
- AI governance in enterprise environments
- Prompt rollback and deployment strategies
- Agent failure modes and mitigation patterns
- Measuring business impact beyond token usage
- Operating AI inside regulated environments
- Reusable agent skill systems
- Human review and escalation patterns
- AI deployment pipelines and release engineering
- Observability strategies for agent workflows
- Organizational lessons from deploying AI internally
We strongly encourage talks grounded in:
- operational metrics,
- incident learnings,
- deployment trade-offs,
- and architectural revisions.
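For instance, "real production inference costs" (one of the topics above) often reduces to per-request token accounting plus a traffic projection, as in this sketch. The prices per 1K tokens are placeholders, not real vendor rates:

```python
# Minimal sketch of per-request inference cost accounting.
# Prices per 1K tokens are placeholders, not real vendor rates.

PRICE_PER_1K = {"input": 0.0005, "output": 0.0015}

def request_cost(input_tokens, output_tokens):
    """Dollar cost of one request from its token counts."""
    return (input_tokens / 1000) * PRICE_PER_1K["input"] \
         + (output_tokens / 1000) * PRICE_PER_1K["output"]

def monthly_cost(requests_per_day, avg_in, avg_out, days=30):
    """Projected monthly spend for a steady traffic pattern."""
    return requests_per_day * days * request_cost(avg_in, avg_out)

cost = monthly_cost(requests_per_day=10_000, avg_in=1_200, avg_out=400)
```

A strong lightning talk would go further: cache hit rates, batching effects, and how the projection compared with the actual bill.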
No pitch decks.
This is for startups willing to demonstrate:
- real workflows,
- real operational problems,
- and real implementation details.
We are especially interested in startups solving:
- AI infrastructure,
- evaluation,
- governance,
- observability,
- agent orchestration,
- or enterprise integration problems.
BOFs (birds-of-a-feather sessions) are discussion-first rooms for practitioners dealing with difficult operational questions.
No presentations required.
Bring:
- a hard problem,
- a failed experiment,
- an architectural dilemma,
- or a question you want debated deeply.
Candidate topics:
- Agent systems that failed unexpectedly
- Evaluation systems that actually work in production
- Understanding and communicating AI infrastructure cost
- RAG vs fine-tuning: where the line changes
- AI governance in large organizations
- Observability for long-running agent workflows
- Managing “agent sprawl” across teams
- AI supply chain and model integrity
- Prompt/version rollback strategies
- Operational debugging for AI systems
Final BOF topics will be shaped collaboratively with participants on the day of the event.
We are especially interested in submissions from:
- AI/ML engineering teams
- Platform and infrastructure engineers
- Backend engineers building AI systems
- Enterprise architects
- AI platform teams
- DevOps and SRE practitioners
- Applied AI teams
- Governance and compliance practitioners
- Internal tooling teams
- Engineering leaders running AI initiatives
You do not need a perfect success story to submit.
Operational failures, architectural rewrites, scaling bottlenecks, and hard-earned lessons are highly valuable.
Strong submissions typically contain:
- a concrete production problem,
- system architecture details,
- operational lessons,
- metrics or evaluation methods,
- trade-offs,
- failure analysis,
- cost or reliability considerations,
- and implementation specifics.
Submissions will be evaluated on:
- technical depth,
- operational specificity,
- originality,
- usefulness to practitioners,
- clarity of lessons learned,
- and relevance to production AI systems.
Preference will be given to talks grounded in:
- real deployments,
- measurable outcomes,
- and engineering trade-offs.
1 June 2026
Ramakrishna Reddy Yekulla leads the technical strategy and operationalization of AI models within Red Hat’s Data + AI group.
A long-time open-source contributor, he has worked on projects including Fedora, Django, GNOME, and GlusterFS. His interests span AI infrastructure, systems architecture, functional programming, and large-scale operational engineering.
💬 Comment on the discussion forum:
Enterprise AI in Production discussions
📞 Call/WhatsApp: +91 7676332020
📧 Email: info@hasgeek.com