Feb 27–28, 2026, 09:30 AM – 05:00 PM IST
Prasanth Jayaprakash
Submitted Jan 10, 2026
Most teams bolt AI onto existing processes: same Jira workflows, same sprint rituals, same onboarding docs, just with Copilot autocompleting some code. But what if you could start fresh? What if you had a greenfield project and could design your entire delivery process knowing what AI can do today?
We got that chance. Equal Experts and Travelopia partnered to rebuild a legacy lead-scoring system from scratch. No legacy code to migrate, no existing architecture to preserve, just a business case and a blank repo. We used this clean slate to ask a dangerous question: what would an SDLC look like if we designed it around AI from day one? The answer looked nothing like what we’d been taught. Engineers and Product Owners paired to write PRDs in markdown and committed them directly to the repo, creating a shared context that both humans and LLMs could consume. We generated stakeholder reports directly from commit histories, and structured our documentation well enough that new developers onboarded entirely self-service, without formal knowledge transfer (KT) sessions. Three months later, we shipped to production. Seven months on, the system handles 5,000+ leads monthly.
This talk isn’t about adding AI to your workflow. It’s about what becomes possible when you design the workflow around AI. I’ll share the specific patterns we used, the guardrails that kept AI from over-engineering (human-owned architecture, open-source tools like Context7 for up-to-date library documentation), and a practical framework for teams ready to rethink, not just optimise, how they deliver software.
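To make the “stakeholder reports from commit histories” pattern concrete, here is a minimal sketch of what that step could look like. It is an illustration, not the team’s actual tooling: it assumes Conventional Commit subjects (feat:, fix:, docs:, …), and the function names, labels, and report wording are hypothetical.

```python
# Illustrative sketch only: summarise recent commit history into a plain-text
# stakeholder update. Assumes Conventional Commit subjects (feat:, fix:, docs:, ...);
# the grouping, labels, and wording are hypothetical, not the team's actual tooling.
import subprocess
from collections import defaultdict


def recent_commits(since="1 week ago"):
    """Return (short_hash, subject) pairs from `git log` for the given window."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--pretty=format:%h%x09%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split("\t", 1) for line in out.splitlines() if line.strip()]


def stakeholder_report(since="1 week ago"):
    """Group commit subjects by type prefix and render a short delivery update."""
    groups = defaultdict(list)
    for _, subject in recent_commits(since):
        prefix, _, rest = subject.partition(":")
        # "feat(scoring): ..." -> "feat"; anything without a prefix goes to "other"
        key = prefix.split("(")[0].strip().lower() if rest else "other"
        groups[key].append(rest.strip() or subject)

    labels = {"feat": "Delivered", "fix": "Fixed", "docs": "Documentation"}
    lines = [f"Delivery update (commits since {since})"]
    for key, subjects in groups.items():
        lines.append(f"\n{labels.get(key, key.capitalize())}:")
        lines.extend(f"  - {s}" for s in subjects)
    return "\n".join(lines)


if __name__ == "__main__":
    print(stakeholder_report())
```

The same `git log` output could just as easily be fed to an LLM for summarisation; the point of the pattern is that the repo, not a separate tracking tool, is the source of record.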
A blueprint for AI-native delivery when starting fresh: Learn how to design a ceremony-light, Git-native SDLC from scratch, where engineers and Product Owners collaborate on markdown PRDs in the repo, commit histories drive stakeholder reporting, and documentation doubles as AI context and self-service onboarding material.
Guardrails for letting AI do the heavy lifting without losing control: Understand how to draw clear boundaries between human decisions (architecture, security, design) and AI execution (implementation, tests, documentation), with open-source tooling patterns that keep your AI current and your codebase maintainable.
This session is for engineering leads, architects, and senior developers who are about to start a new project or rewrite and are wondering: should we do this differently? It’s also for delivery managers frustrated with process overhead who suspect there’s a better way but haven’t seen it proven. If you’ve experimented with AI coding tools but haven’t yet had the chance to design a delivery process around them, you’ll leave with a concrete model for what’s possible when you start from zero.
Prasanth Jayaprakash is a Principal Consultant at Equal Experts with nearly 17 years of hands-on experience building production systems across backend (Java, Kotlin, Groovy) and frontend (React, Vue) technologies.
He recently led an experimental AI-native delivery engagement with Travelopia, testing whether AI could fundamentally change how teams coordinate work, not just how fast they write code. The result: a lead-scoring platform built in three months, running in production for seven months without major issues.
His current work focuses on two areas: transforming software delivery practices using LLM-powered workflows, and building agentic AI systems using RAG and tool integration. This talk shares the operational framework, the specific guardrails, and an honest assessment of when ceremony-light delivery works and when it doesn’t.