BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//HasGeek//NONSGML Funnel//EN
DESCRIPTION:Call for Proposals
X-WR-CALDESC:Call for Proposals
NAME:Enterprise AI in Production
X-WR-CALNAME:Enterprise AI in Production
REFRESH-INTERVAL;VALUE=DURATION:PT12H
X-PUBLISHED-TTL:PT12H
X-WR-TIMEZONE:Asia/Kolkata
BEGIN:VEVENT
SUMMARY:Enterprise AI in Production
DTSTART:20260619T083000Z
DTEND:20260619T123000Z
DTSTAMP:20260511T173154Z
UID:session/smDc6DRnijsbjn8bZeNPQ@hasgeek.com
SEQUENCE:19
CREATED:20260420T050805Z
DESCRIPTION:## Call for talks\, demos & discussions\n\n### Enterprise AI i
 n Production meet-up\n\n**19 June 2026 | Bengaluru**\n\nThis is not a trad
 itional AI conference.\n\nWe are inverting the industry standard of **80% 
 slides and 20% substance**. The focus here is on executable knowledge\, op
 erational depth\, and systems that are already meeting reality.\n\nThe cen
 tral question driving this event is:\n\n> **How do you know it’s working
 ?**\n\nEvery submission — whether a talk\, demo\, discussion\, or case s
 tudy — should answer this with concrete evidence drawn from:\n* evaluati
 on pipelines\,\n* observability and tracing\,\n* architecture decisions\,\
 n* operational trade-offs\,\n* failure analysis\,\n* latency and reliabili
 ty data\,\n* or cost benchmarks.\n\nIf you can show it running in producti
 on\, show it.\n\n---\n\n## What we are looking for\nWe want submissions fr
 om teams building and operating AI systems under real constraints:\n\n* la
 tency\,\n* scale\,\n* governance\,\n* infrastructure cost\,\n* enterprise 
 integration\,\n* reliability\,\n* security\,\n* and organizational complex
 ity.\n\nThis is a conference for practitioners navigating the messy middle
  between:\n\n> “the demo worked”\n> and\n> “the system survived prod
 uction.”\n\n---\n\n# Submission formats\n\n## 1. Anchor talks\n### 25-mi
 nute technical deep dive + 5-minute Q&A\nAnchor talks set the intellectual
  frame for the day by examining the gap between prototype AI systems and p
 roduction reality.\n\nWe are looking for CTOs\, principal engineers\, arch
 itects\, and senior practitioners willing to share the real architecture a
 nd operational lessons behind deployed systems.\n\n### Themes we are espec
 ially interested in\n* Workflow orchestration across enterprise systems\n*
  Multi-agent architectures and execution models\n* Sovereign or air-gapped
  AI deployments\n* AI infrastructure and inference platforms\n* Evaluation
  systems and deployment safety\n* AI observability and tracing\n* Governan
 ce\, auditability\, and compliance\n* Reliability engineering for agentic 
 systems\n* Human-in-the-loop escalation architectures\n* Cost-aware AI sys
 tems and inference optimization\n\n### Strong anchor talks should include\
 n* what worked\,\n* what failed\,\n* what was thrown away\,\n* what the me
 trics revealed\,\n* and the one architectural decision you would change to
 day.\n\n**Requirement:** No roadmap slides. No future vision decks.\n\n---
 \n\n## 2. Live demos\n### 15 minutes each\nThese sessions are the core of 
 the conference.\n\nWe are specifically looking for:\n\n> live demonstratio
 ns of production-grade AI systems.\n\nNot prototypes. Not conceptual walkt
 hroughs.\n\nShow:\n* execution traces\,\n* failure handling\,\n* evaluatio
 n outputs\,\n* rollback behavior\,\n* observability tooling\,\n* orchestra
 tion logic\,\n* cost signals\,\n* or operational controls.\n\n### Example 
 areas\n* Multi-agent systems with retries\, escalation\, and trace visibil
 ity\n* Evaluation pipelines catching real regressions in CI/CD\n* RAG syst
 ems with measurable hallucination reduction\n* AI-powered internal tooling
  and developer workflows\n* AI governance and policy enforcement systems\n
 * LLM routing\, caching\, and inference optimization\n* Model evaluation d
 ashboards and feedback loops\n* Long-running workflow orchestration\n* AI 
 incident debugging and recovery tooling\n* Tool use / function-calling rel
 iability at scale\n* Multimodal retrieval systems with attribution\n* GPU 
 utilization and inference infrastructure tooling\n\n### We especially valu
 e demos that show\n* failures\,\n* debugging\,\n* trade-offs\,\n* and oper
 ational safeguards.\n\n---\n\n## 3. Lightning talks\n### 12 minutes each\n
 Lightning talks are short\, sharp\, and insight-dense.\n\nOne orientation 
 slide. Then the thing itself.\n\n### Suggested themes\n* Real production i
 nference costs\n* Evaluation methodologies that actually worked\n* AI gove
 rnance in enterprise environments\n* Prompt rollback and deployment strate
 gies\n* Agent failure modes and mitigation patterns\n* Measuring business 
 impact beyond token usage\n* Operating AI inside regulated environments\n*
  Reusable agent skill systems\n* Human review and escalation patterns\n* A
 I deployment pipelines and release engineering\n* Observability strategies
  for agent workflows\n* Organizational lessons from deploying AI internall
 y\n\nWe strongly encourage talks grounded in:\n\n* operational metrics\,\n
 * incident learnings\,\n* deployment trade-offs\,\n* and architectural rev
 isions.\n\n---\n\n## 4. Startup showcase\n### Live product demonstration\n
 No pitch decks.\n\nThis is for startups willing to demonstrate:\n\n* real 
 workflows\,\n* real operational problems\,\n* and real implementation deta
 ils.\n\nWe are especially interested in startups solving:\n* AI infrastruc
 ture\,\n* evaluation\,\n* governance\,\n* observability\,\n* agent orchest
 ration\,\n* or enterprise integration problems.\n\n---\n\n## 5. Birds of a
  Feather (BOF) discussions\nBOFs are discussion-first rooms for practition
 ers dealing with difficult operational questions.\n\nNo presentations requ
 ired.\n\nBring:\n* a hard problem\,\n* a failed experiment\,\n* an archite
 ctural dilemma\,\n* or a question you want debated deeply.\n\n### Possible
  BOF themes\n* Agent systems that failed unexpectedly\n* Evaluation system
 s that actually work in production\n* Understanding and communicating AI i
 nfrastructure cost\n* RAG vs fine-tuning: where the line changes\n* AI gov
 ernance in large organizations\n* Observability for long-running agent wor
 kflows\n* Managing “agent sprawl” across teams\n* AI supply chain and 
 model integrity\n* Prompt/version rollback strategies\n* Operational debug
 ging for AI systems\n\nFinal BOF topics will be shaped collaboratively wit
 h participants on the day of the event.\n\n---\n\n# Who should submit?\nWe
  are especially interested in submissions from:\n* AI/ML engineering teams
 \n* Platform and infrastructure engineers\n* Backend engineers building AI
  systems\n* Enterprise architects\n* AI platform teams\n* DevOps and SRE p
 ractitioners\n* Applied AI teams\n* Governance and compliance practitioner
 s\n* Internal tooling teams\n* Engineering leaders running AI initiatives\
 n\nYou do not need a perfect success story to submit.\n\nOperational failu
 res\, architectural rewrites\, scaling bottlenecks\, and hard-earned lesso
 ns are highly valuable.\n\n---\n\n# What strong submissions usually includ
 e\nStrong submissions typically contain:\n* a concrete production problem\
 ,\n* system architecture details\,\n* operational lessons\,\n* metrics or 
 evaluation methods\,\n* trade-offs\,\n* failure analysis\,\n* cost or reli
 ability considerations\,\n* and implementation specifics.\n\n---\n\n# How 
 submissions will be reviewed\nSubmissions will be evaluated on:\n\n* techn
 ical depth\,\n* operational specificity\,\n* originality\,\n* usefulness t
 o practitioners\,\n* clarity of lessons learned\,\n* and relevance to prod
 uction AI systems.\n\nPreference will be given to talks grounded in:\n* re
 al deployments\,\n* measurable outcomes\,\n* and engineering trade-offs.\n
 \n---\n\n## Deadline for submissions\n**1 June 2026**\n\n---\n\n## About t
 he editor\nRamakrishna Reddy Yekulla leads the technical strategy and oper
 ationalization of AI models within Red Hat’s Data + AI group.\n\nA long-
 time open-source contributor\, he has worked on projects including Fedora\
 , Django\, GNOME\, and GlusterFS. His interests span AI infrastructure\, s
 ystems architecture\, functional programming\, and large-scale operational
  engineering.\n\n---\n\n## Queries & contact information\n💬 Comment on 
 the discussion forum:\n[Enterprise AI in Production discussions](https://
 hasgeek.com/fifthelephant/enterprise-ai-in-production/comments)\n\n📞 Ca
 ll/WhatsApp: +91 7676332020\n📧 Email: [info@hasgeek.com](mailto:info@ha
 sgeek.com)\n
LAST-MODIFIED:20260507T124921Z
LOCATION:Bangalore
ORGANIZER;CN="The Fifth Elephant":MAILTO:no-reply@hasgeek.com
URL:https://hasgeek.com/fifthelephant/enterprise-ai-in-production-cfp/
BEGIN:VALARM
ACTION:DISPLAY
DESCRIPTION:Enterprise AI in Production in 5 minutes
TRIGGER:-PT5M
END:VALARM
END:VEVENT
END:VCALENDAR
