Transformers are everywhere! But how do you serve them? How do you leverage serverless to get scalability without the operational worries? Isn't serverless only for lightweight applications? How do you get the best latencies out of a serverless setup? I will be sharing answers to these questions in my talk.
Slides - https://bit.ly/serverless-transformers
A self-taught data scientist and open-source developer from India, he specialises in building Search & NLP solutions.
He runs a Slack data science community at http://maxpool.club and writes at https://pakodas.substack.com.
You can find his previous talks with PyData, WiMLDS & DAIR at http://talks.pratik.ai
Portfolio - http://pratik.ai
- Paradigms of deployment
- Live server
- Batch processing
- Serverless
- Benefits of serverless
- Deploying transformer models on Lambda (a minimal handler sketch follows this outline)
- Exposing API
- Versioning lambdas
- CI/CD with GitHub Actions
- Runtime limitations and consequences
- Multi-tenant design for lambdas (a second sketch follows this outline)
- Conclusion
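Ahead of the session, here is a minimal sketch of what serving a transformer from a Lambda function can look like. It assumes a Hugging Face `pipeline` behind an API Gateway proxy integration; the model name, handler name, and event shape are illustrative assumptions, not necessarily the exact setup from the talk.

```python
import json

from transformers import pipeline

# Loading the model at module scope means it is initialised once per
# container and reused across warm invocations, which is the main
# latency trick for transformer models on Lambda. The model name here
# is an assumption chosen for illustration.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def handler(event, context):
    # Assumes the Lambda sits behind an API Gateway proxy integration,
    # so the request payload arrives as a JSON string in event["body"].
    body = json.loads(event.get("body") or "{}")
    text = body.get("text", "")
    result = classifier(text)[0]
    return {
        "statusCode": 200,
        "body": json.dumps({
            "label": result["label"],
            "score": result["score"],
        }),
    }
```

Keeping model initialisation outside the handler is what makes warm invocations fast; cold starts and deployment-package size are exactly the kind of constraints covered under "Runtime limitations and consequences".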
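One plausible reading of the multi-tenant item above is a single Lambda image hosting several pipelines and picking one per request; the task/model registry below is purely an assumption for illustration.

```python
import json
from functools import lru_cache

from transformers import pipeline

# Hypothetical task registry: which pipelines this one Lambda can serve.
# Both model names are assumptions chosen for illustration.
TASKS = {
    "sentiment": ("sentiment-analysis",
                  "distilbert-base-uncased-finetuned-sst-2-english"),
    "ner": ("token-classification", "dslim/bert-base-NER"),
}

@lru_cache(maxsize=None)
def get_pipeline(task_name):
    # Build each pipeline lazily on first use, then cache it for the
    # lifetime of the container so warm invocations skip model loading.
    task, model = TASKS[task_name]
    return pipeline(task, model=model)

def handler(event, context):
    # Assumes an API Gateway proxy event with a JSON body like
    # {"task": "sentiment", "text": "..."}.
    body = json.loads(event.get("body") or "{}")
    task_name = body.get("task", "sentiment")
    if task_name not in TASKS:
        return {"statusCode": 400,
                "body": json.dumps({"error": f"unknown task: {task_name}"})}
    result = get_pipeline(task_name)(body.get("text", ""))
    return {"statusCode": 200, "body": json.dumps(result, default=str)}
```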
Learn to deploy transformers in production
Serverless is a strong fit for many ML serving scenarios
Get huge, near-instant scalability with serverless
Significant savings in cost and operational headache
Suitable for any level of audience and the whole ML community