Priyadarshan Patil

@pd85

Prajna Kandarpa

@praj_k

Choosing Wisely: When AI Agents Make Sense

Submitted Sep 29, 2025

Problem Statement

Agentic AI has become one of the most discussed trends in the AI ecosystem—often surrounded by both excitement and confusion. Many organizations feel pressured to “do something with agents” due to market, peer, and leadership expectations. However, not every business problem truly warrants an Agentic AI approach.

This BoF session explored how to distinguish real opportunities for AI agents from hype, and how to make informed choices between Agentic AI systems and traditional ML or rule-based solutions, balancing innovation, reliability, and cost-effectiveness. This understanding is critical not only for technology leaders but also for the software engineers building these systems.

Topics Discussed

  • Building a shared understanding of what constitutes an AI Agent and Agentic AI (LLM + tool/function calling loop).
  • Discussing when Agentic AI systems are appropriate versus when simpler or deterministic approaches suffice.
  • Evaluating real-world use cases from participants—business question answering, HR assistants, and insurance decision automation.
  • Assessing value, risk, cost, and regulatory implications of using non-deterministic systems in enterprise contexts.
  • Exploring guard rails, such as evaluation frameworks and human-in-the-loop designs, to ensure reliability.
  • Sharing best practices and go-to resources for practitioners building or assessing agent-based systems.

Key Insights from the Discussion

  • Shared Definition Matters – Participants converged on a practical definition of Agentic AI: “LLM + tool calling in a reasoning loop until a goal is achieved.” This clarity helped separate genuine agentic capabilities (reasoning, planning, tool use) from simple LLM automation.

  • Decision Framework Emerged – A mental model evolved around four key dimensions:

    • Value Created – Does the problem need contextual reasoning or multi-step decision-making?
    • Alternatives Evaluation – Have traditional ML, automation, or rule-based approaches been evaluated before opting for agents?
    • Complexity of Problem – Does the problem’s nature truly demand reasoning or adaptive behavior?
    • Risk and Cost – Can the system tolerate non-deterministic outputs, and does the complexity and compute cost justify the benefit?
  • When Agents Don’t Fit – Consistency-critical workflows (e.g., HR policy communication) and highly regulated environments (e.g., insurance decisions) often require deterministic behavior, making traditional systems more suitable.

  • Guard Rails Are Essential – Even in valid agentic use cases, human oversight, evaluation frameworks, and bias mitigation are essential to ensure accountability and trustworthiness.

  • Valuation Assessment Gap – A majority of participants had not undertaken a value or ROI assessment, either directly or through business stakeholders.

  • Value vs. System Complexity – Agentic systems introduce significant architectural, operational, and engineering complexity. The business value generated must be substantial enough to justify the added overhead, especially when weighed against simpler alternatives and the risks of non-deterministic behavior.
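The shared definition above, "LLM + tool calling in a reasoning loop until a goal is achieved," can be sketched in a few lines. This is a minimal illustration, not a production agent: the model is stubbed with a scripted planner so the loop is runnable, and `get_weather`, `call_llm`, and the step budget are all assumptions for the sketch.

```python
def get_weather(city: str) -> str:
    # Hypothetical tool; a real agent registers many such functions.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def call_llm(goal: str, history: list) -> dict:
    # Stubbed "reasoning" step: request one tool call, then declare the goal met.
    # In practice this would be a call to a real model API.
    if not history:
        return {"action": "tool", "name": "get_weather",
                "args": {"city": "Hyderabad"}}
    return {"action": "final", "answer": f"Goal '{goal}' met using: {history[-1]}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):            # bounded loop is itself a basic guard rail
        step = call_llm(goal, history)
        if step["action"] == "final":     # goal achieved: exit the loop
            return step["answer"]
        result = TOOLS[step["name"]](**step["args"])  # execute the requested tool
        history.append(result)
    return "Stopped: step budget exhausted"

print(run_agent("report the weather"))
```

Even this toy loop shows where the session's concerns enter: the tool registry bounds what the agent can do, and the step budget and explicit termination condition are the simplest forms of the guard rails discussed above.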

Takeaways for Participants

  • A practical framework to decide when an AI agent adds genuine value.
  • The importance of working with the business to study use cases, create thorough value evaluations, scope outcomes, and conduct risk assessments.
  • Awareness of common pitfalls when force-fitting agents into deterministic workflows.
  • Understanding of how to balance innovation with reliability in enterprise AI adoption.
  • Building internally offers maximum customization, strategic advantage, and control over proprietary data, but it requires high internal expertise and carries significant risk and maintenance overhead. Conversely, buying a COTS platform provides faster time-to-market and lower initial costs, but sacrifices customization and strategic control while creating dependency on the vendor.
  • Real-world lessons from both successful and failed implementations of Agentic AI.
  • A set of resources and practices for responsible and effective use of agents.
  • A reminder that developers and engineering teams must think beyond technology—considering value creation, ROI, total cost of ownership, and business risk is as critical as technical implementation. The focus should be on larger business outcomes, not just the sophistication of the tech stack.

Derived Decision Framework:

We have also created a sample assessment that you can use as a reference when making a decision in the context of your use case. The assessment is a work in progress and will evolve with the rapid changes in this space and with our experience of building such systems.

https://dev.apariva.ai/assessments/agentic
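The four dimensions of the derived framework can also be read as a simple checklist. The sketch below is illustrative only (it is not the linked assessment): the question keys, wording, and the "any no means prefer simpler" rule are assumptions made for this example.

```python
# Four yes/no checks mirroring the framework's dimensions:
# value created, alternatives evaluated, tolerance for non-determinism,
# and value vs. system complexity.
QUESTIONS = {
    "needs_contextual_reasoning": "Does the problem need multi-step, contextual reasoning?",
    "simpler_alternatives_ruled_out": "Have ML, rules, or plain automation been evaluated and found lacking?",
    "tolerates_nondeterminism": "Can the workflow tolerate non-deterministic outputs?",
    "value_exceeds_complexity_cost": "Does the business value justify the added complexity and cost?",
}

def assess(answers: dict) -> str:
    # Any "no" is a reason to prefer a deterministic, traditional approach.
    failed = [q for q in QUESTIONS if not answers.get(q, False)]
    if not failed:
        return "Agentic AI is a reasonable fit"
    return "Prefer a simpler approach; unmet checks: " + ", ".join(failed)

# Example: a consistency-critical HR policy bot, as discussed in the session.
print(assess({
    "needs_contextual_reasoning": True,
    "simpler_alternatives_ruled_out": True,
    "tolerates_nondeterminism": False,   # HR policy answers must be consistent
    "value_exceeds_complexity_cost": True,
}))
```

The point of the sketch is the shape of the decision, not the scoring rule: a single failed check (here, tolerance for non-determinism) is enough to steer a use case back toward deterministic systems.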

About Prajna:

Prajna Kandarpa is the Founder of Apariva Systems LLP, an AI and data-first consultancy and product company. He is an experienced software engineering leader with deep expertise in modern AI solutions, enterprise platforms, and financial services. Prajna specializes in moving AI solutions from data to meaning by crafting tools that perceive and understand. Previously, he served as the Director of Engineering for the India division of CognitiveScale, Inc., where he managed large teams, defined product roadmaps for enterprise AI platforms, and worked closely with cloud partners such as AWS, Azure, and Red Hat OpenShift. His background includes leadership roles at multiple startups, focusing on high-impact solutions that scale to millions of users.

LinkedIn Profile:
https://www.linkedin.com/in/prajnak/

About Priyadarshan:

I work as a Solution Consultant at Sahaj.ai.

At Sahaj, I am learning to build technology platforms with extreme programming practices and to work in high-trust, high-accountability technology teams.

I also serve as Office Lead for the Sahaj Hyderabad office, with the purpose of building a community of Sahajeevis in Hyderabad with a strong tech and consulting culture, the Sahaj way.

I have 16 years of software engineering experience focused on tech consulting, implementation, and building strong technology teams. I have worked on distributed systems and traditional monolith platforms, and have built products on top of the Eclipse Platform. I have diverse experience across software engineering: building tech platforms that handle PII data and require regulatory compliance, delivery management, customer relationship management, and presales.

LinkedIn Profile:
https://www.linkedin.com/in/priyadarshanpatil/


Hosted by

Jumpstart better data engineering and AI futures

Supported by

Meet-up sponsor: Thoughtworks, a global technology consultancy that integrates strategy, design and engineering to drive digital innovation.