The Fifth Elephant 2025 Annual Conference CfP
Neel Shah
Submitted May 27, 2025
LLMs are game-changing, but understanding their behavior is even more crucial, since it can dramatically improve output quality. Simply observing the input and output – the "prompt" and the "response" – is no longer sufficient for building robust and dependable LLM-powered applications.
LLM Observability refers to a more in-depth approach to monitoring large language models: capturing not only the raw outputs but also metrics, traces, and patterns of behavior. Without observability, identifying and fixing anomalies, performance issues, sensitive data leaks, and inaccuracies becomes difficult.
In this talk, we will discuss how to develop complete end-to-end observability for LLM applications using OpenLit.
Outcomes LLM observability provides to enhance performance:
Response latency: How quickly the model responds to user queries.
Data leak detection: Whether PII or other sensitive data appears in responses.
Classified data request rate: How effectively guardrails exclude classified data, tracking how often such queries occur and from which users.
Token usage: Tracking token consumption to manage operational costs.
Prompt effectiveness: Evaluating how well the crafted prompts generate the desired outputs.
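As a rough illustration of the first three signals above, the sketch below wraps a single model call and records latency, a whitespace-based token estimate, and a naive PII check. All names here (`observe_llm_call`, `PII_PATTERNS`, the stub model) are hypothetical for this example and are not OpenLit's API; in practice OpenLit instruments these signals automatically via OpenTelemetry-style traces.

```python
import re
import time

# Naive PII patterns (email, US-style phone) -- illustration only,
# not a production-grade detector.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),           # email addresses
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), # phone numbers
]

def contains_pii(text: str) -> bool:
    """Flag responses that leak an email address or phone number."""
    return any(p.search(text) for p in PII_PATTERNS)

def observe_llm_call(model_fn, prompt: str) -> dict:
    """Wrap one model call; record latency, token usage, and PII leakage."""
    start = time.perf_counter()
    response = model_fn(prompt)
    latency = time.perf_counter() - start
    return {
        "response": response,
        "latency_s": round(latency, 4),
        # Whitespace token counts are a crude proxy for real tokenizer usage.
        "prompt_tokens": len(prompt.split()),
        "completion_tokens": len(response.split()),
        "pii_leak": contains_pii(response),
    }

# Usage with a stub model standing in for a real LLM call:
def fake_model(prompt: str) -> str:
    return "Contact me at alice@example.com for details."

span = observe_llm_call(fake_model, "How do I reach support?")
print(span["pii_leak"])  # True: the stub response leaks an email address
```

A real deployment would export such spans to a telemetry backend rather than printing them, which is exactly the gap OpenLit fills.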