How to improve the accuracy of your financial language models
Imagine AI that understands financial data better than you do…
Financial institutions are increasingly aware of the value hidden in unstructured data for spotting issues and uncovering trends in financial markets. Refinitiv Labs has therefore been working on projects that apply language models to financial data to unlock this potential.
One example is a project in which Refinitiv Labs trained a BERT model on Reuters news; the model achieves a high level of accuracy and returns a single document embedding for a given text via an API.
The API will enable financial data practitioners to fine-tune models across a range of NLP use cases and applications, without expensive upfront pre-training and embedding work. The first users of Labs’ prototype model have seen better accuracy scores on their models thanks to the high-quality text representations generated by Labs’ underlying model.
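To make the fine-tuning pattern concrete, here is a minimal sketch of training a lightweight classifier head on pre-computed document embeddings instead of pre-training a model from scratch. The `get_document_embedding` function below is a hypothetical stand-in for an embedding API (the real Labs API is not shown here), and the documents, labels, and logistic-regression head are illustrative only:

```python
import hashlib
import numpy as np

def get_document_embedding(text: str, dim: int = 8) -> np.ndarray:
    """Hypothetical stand-in for an embedding API call.

    In practice this would be a single API request returning one
    fixed-size embedding vector per document. Here we derive a
    deterministic toy vector from the text for illustration only.
    """
    seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(dim)

# Toy corpus with made-up sentiment labels (1 = positive, 0 = negative).
docs = [
    "Reuters reports rates rise",
    "Markets fall on earnings miss",
    "Central bank holds rates",
    "Tech stocks rally on results",
]
labels = np.array([1, 0, 1, 0])

# Pre-computed embeddings become the features for the downstream task.
X = np.stack([get_document_embedding(d) for d in docs])

# Simple logistic-regression head trained by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad = p - labels                        # gradient of log loss
    w -= 0.5 * X.T @ grad / len(docs)
    b -= 0.5 * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds.tolist())
```

The point of the pattern is that the expensive representation learning happens once, upstream; the downstream task only needs a small head trained on the returned vectors.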
In this hands-on lab, you will take away:
- Practical steps and lessons learned from building and training a state-of-the-art BERT language model
- Key financial use cases and applications that Refinitiv Labs’ new language model will enable
- How domain-specific language models can be integrated into NLP training pipelines
Complete the form to access on-demand now.
Duration: 30 minutes
Login: Shared after registration