Auquan is an AI startup that serves institutional investors and investment managers with curated news and documents to help them make better investment decisions.
In this presentation, I will discuss our approach to tuning a base language model for multiple tasks, such as filtering noise from streaming news feeds, scoring relevance, matching news to topics, and curating relevant documents.
I will walk through the process of tuning a language model for a general use case, the pitfalls we encountered along the way, and the metrics we use to evaluate the tuned model's performance.
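To make the multi-task setup concrete, here is a minimal sketch of the general pattern: one shared representation feeding several small task-specific heads (noise, relevance, topic). Everything here is illustrative, not Auquan's actual implementation: the frozen base encoder is stubbed out as a deterministic pseudo-embedding, the heads are plain logistic regressions, and the training data is toy.

```python
import hashlib
import math
import random

EMB_DIM = 8
TASKS = ["noise", "relevance", "topic"]


def embed(text):
    """Stub for a frozen base model: a deterministic pseudo-embedding.

    A real system would call the tuned language model here.
    """
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(EMB_DIM)]


class LinearHead:
    """One logistic-regression head per task, trained on the shared embedding."""

    def __init__(self, dim):
        self.w = [0.0] * dim
        self.b = 0.0

    def predict(self, x):
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def sgd_step(self, x, y, lr=0.5):
        g = self.predict(x) - y  # gradient of log-loss w.r.t. the logit
        self.w = [wi - lr * g * xi for wi, xi in zip(self.w, x)]
        self.b -= lr * g


heads = {task: LinearHead(EMB_DIM) for task in TASKS}

# Toy labelled data: (text, task, binary label).
data = [
    ("markets rally on earnings", "relevance", 1),
    ("click here to win", "noise", 1),
    ("quarterly report filed", "noise", 0),
]

for _ in range(200):
    for text, task, label in data:
        heads[task].sgd_step(embed(text), label)

print(heads["noise"].predict(embed("click here to win")))
```

The point of the pattern is that each task only adds a small head; the expensive shared encoder is computed once per document and reused across all tasks.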
ML engineers, early-stage data scientists
- How to tune a base language model for multiple tasks
- Existing libraries for tuning language models
- Best practices and pitfalls for tuning language models
- Introduction
- About Auquan and our use case
- Problem description
- Language models for multi-tasking
- How we use language models
- Tuning an LM
- Using tuned models for embedding
- Best practices and pitfalls
- Conclusion/QA
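The "using tuned models for embedding" step in the outline can be sketched as embedding documents once and then ranking them against a query by cosine similarity. This is a hedged stand-in, not the talk's actual pipeline: the `embed` stub below is a bag-of-words term-frequency vector, where a real system would use the tuned model's sentence embeddings.

```python
import math
from collections import Counter


def embed(text):
    """Stand-in embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


docs = [
    "central bank raises interest rates",
    "new smartphone released this week",
    "bond yields climb after rate decision",
]

query = "interest rate decision"
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(ranked[0])
```

Swapping the stub for a tuned encoder keeps the retrieval logic unchanged, which is what makes the embedding-based curation step composable with the multi-task tuning discussed earlier.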