TY - UNPB
T1 - Re(Visiting) Large Language Models in Finance
AU - Rahimikia, Eghbal
AU - Drinkall, Felix
PY - 2024/9/21
Y1 - 2024/9/21
AB - This study introduces a novel suite of historical large language models (LLMs) pre-trained specifically for accounting and finance, utilising a diverse set of major textual resources. The models are unique in that they are year-specific, spanning from 2007 to 2023, effectively eliminating look-ahead bias, a limitation present in other LLMs. Empirical analysis reveals that, in trading, these specialised models outperform much larger models, including the state-of-the-art LLaMA 1, 2, and 3, which are approximately 50 times their size. The findings are further validated through a range of robustness checks, confirming the superior performance of these LLMs.
UR - https://fintext.ai/
U2 - 10.2139/ssrn.4963618
DO - 10.2139/ssrn.4963618
M3 - Working paper
BT - Re(Visiting) Large Language Models in Finance
PB - SSRN
ER -