The Fifth Elephant 2025 Annual Conference CfP
Submitted May 30, 2025
LLMs don’t have to live in the cloud: you can self-host them, even at home. Doing so doesn’t just give you control; it can also be cost-effective, and you can evaluate the trade-offs yourself by experimenting.
In this talk, we’ll discuss how to host LLMs locally using Ollama and how to squeeze maximum performance out of the hardware you have. We’ll try out models with different parameter counts and quantization levels, and see how they fare against frontier models on a benchmark like Bird-Bench for text-to-SQL tasks.
We’ll also compare the costs involved in running a task on cloud-hosted frontier models versus running the same task locally.
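As a taste of the kind of comparison the talk covers, here is a minimal back-of-the-envelope sketch in Python. All prices, power draw, and hardware figures below are illustrative assumptions, not measurements from the talk or the blog post.

```python
# Back-of-the-envelope cost comparison: cloud-hosted frontier model
# vs. a self-hosted model on local hardware.
# Every number below is an illustrative assumption.

def cloud_cost(input_tokens, output_tokens,
               price_in_per_m=3.00, price_out_per_m=15.00):
    """API cost in USD, given assumed per-million-token prices."""
    return (input_tokens / 1e6) * price_in_per_m \
         + (output_tokens / 1e6) * price_out_per_m

def local_cost(runtime_hours, power_watts=350, price_per_kwh=0.12,
               hardware_usd=2000, amortize_hours=3 * 365 * 24):
    """Electricity plus hardware cost amortized over an assumed
    three-year lifetime of continuous availability."""
    electricity = (power_watts / 1000) * runtime_hours * price_per_kwh
    amortized_hw = hardware_usd * (runtime_hours / amortize_hours)
    return electricity + amortized_hw

# Example workload: 10M input + 2M output tokens,
# assumed to take ~20 hours of local GPU time.
print(f"cloud: ${cloud_cost(10e6, 2e6):.2f}")
print(f"local: ${local_cost(20):.2f}")
```

The interesting part is not the specific numbers but the shape of the comparison: cloud cost scales with tokens, while local cost scales with wall-clock time and amortization assumptions.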
Yogi is a backend engineer at Nilenso, where he has worked on building a job orchestration platform and an IoT-based telemetry system. Outside of programming, he enjoys traveling, science fiction, astronomy, and self-hosting open source software.
Slides: https://docs.google.com/presentation/d/1H2pGCq35wirBWaGhwZQ0ASOV_hVq7AitqYPWRg0Z5qk/edit?usp=sharing
Blog post covering the same topic: https://blog.nilenso.com/blog/2025/05/27/experimenting-with-self-hosted-llms-for-text-to-sql/