11 Aug 2023 (Fri), 09:00 AM – 06:00 PM IST
Abstract
LLMs perform well on question answering tasks and can generate good answers from the context provided. In many scenarios, however, training or fine-tuning an LLM is challenging due to time and resource constraints. When it comes to adopting LLMs in large organizations, a major problem arises from the confidentiality of data and from the relevant information being scattered across sources. The problem becomes more severe when that information is spread across multiple files, each far longer than the maximum context length of most major LLMs.
In Asset Management, Research Analysts and Portfolio Managers need to go through a large number of research reports, both private and publicly available, extract relevant information, and compare it across companies, sectors, years, markets, etc. We propose a solution in which users can do question answering over their own reports, which can be in different formats, vary from a few to thousands in number, and be of varying sizes. We share some of our learnings from solving problems such as handling the limited context window, utilizing information like metadata effectively, and choosing chunking strategies.
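To make the chunking idea above concrete, here is a minimal sliding-window chunker; this is an illustrative sketch (the function name and the `chunk_size`/`overlap` values are assumptions, not the presenters' actual implementation), showing how a report longer than an LLM's context window can be split into overlapping pieces:

```python
# Hypothetical sketch of a chunking strategy: split a long report into
# overlapping word-based chunks so each piece fits the model's context
# window. The overlap preserves context across chunk boundaries.

def chunk_text(text, chunk_size=200, overlap=50):
    """Split `text` into chunks of up to `chunk_size` words, where each
    chunk shares its first `overlap` words with the end of the previous
    chunk."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last chunk already covers the end of the text
    return chunks
```

Each chunk would then be embedded and indexed so that, at question time, only the most relevant chunks are placed into the prompt; production systems typically split on tokens rather than words and tune the overlap per document type.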
What will you get out of this talk:
• Deep dive into LLMs and prompt engineering
• Strengths and limitations of LLMs for question answering tasks
• An Asset Management perspective on LLMs
Presenters:
Kunal Satija : Fidelity Investments
Pradeep Rathore : Fidelity Investments