
Large Language Models (LLMs) for Finance

Use Cases, Architecture, Features, Costs

ScienceSoft relies on 35 years of experience in artificial intelligence and 19 years in financial software development to design robust LLM solutions for financial data processing, personalized customer service, and guided decision-making.

Large Language Models (LLMs) in Financial Services

Large Language Models in Financial Services: Summary

Large language models (LLMs) are used in financial services to introduce convenient customer self-service, automate data and document processing, streamline operational decision-making, and simplify the research of regulatory guidelines.

By automating data capture, consolidation, validation, and summarization tasks with LLMs, financial service providers can boost data processing speed by up to 50 times. Combined with 20–30% higher employee productivity due to automation, this means that LLMs have the potential to reduce back-office operational costs by up to 40%. Thanks to their data mining and anomaly detection capabilities, LLMs can also bring up to a 300% increase in fraud detection rates.

When it comes to customer service, LLM-based financial assistants provide personalized responses to user inquiries in seconds, boosting customer satisfaction (CSAT) and loyalty. With up to 90% of customer issues resolved by LLM assistants, finance companies can expect a 50%+ decrease in the servicing teams’ workload.

The Market of LLMs for Financial Services

The market of large language models is projected to grow at a CAGR of 32.1% and reach $61.74 billion by 2032. The banking, financial services, and insurance (BFSI) industry is named among the major contributors to the steady market growth. The segment of financial LLMs is anticipated to witness the highest CAGR, driven by the BFSI sector’s need to increase data processing efficiency, improve the accuracy and profitability of operational decisions, enhance customer experiences, and simplify regulatory compliance.

The potential ROI of implementing GenAI and LLMs in finance is immense. McKinsey estimates that in the banking industry alone, technology-driven productivity gains may bring a 9–15% increase in operating profits, or $200–340 billion in annual cumulative value.

According to a 2024 survey by The Alan Turing Institute, the majority of BFSI leaders plan to integrate GenAI and LLMs into their servicing flows within two years, and 70%+ of financial institutions are currently in the proof-of-concept stage for LLM solutions.

What LLMs Can Do for Financial Services

Advanced natural language processing (NLP) and data analytics capabilities of LLMs allow financial organizations to automate up to 90% of tasks across a range of general and domain-specific cases.

Universal use cases

Natural language communication

Employees and customers can chat with LLM-powered solutions using regular text messages. LLMs maintain real-time, human-like conversation and adapt to the user’s language and communication style. When paired with speech recognition and synthesis, LLM solutions can also support voice communication, real-time transcription of phone and VoIP calls, and call auto-dispositioning.

Customer document parsing

LLMs can recognize and categorize customer documents (service applications, proofs of solvency, third-party contracts, etc.) and extract the required data. They can then validate the data (e.g., reconcile against historical records, cross-reference with the data from external sources, verify against KYC requirements) and report the revealed discrepancies.
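
For illustration, below is a minimal Python sketch of such an extraction-and-validation step. It assumes the OpenAI chat completions API as one possible backend; the model name, the prompt wording, and the `known_customers` lookup are hypothetical placeholders rather than part of any specific ScienceSoft solution.

```python
import json

from openai import OpenAI  # any chat-completion-capable LLM client would do

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXTRACTION_PROMPT = (
    "Extract the following fields from the customer document below and return "
    "them as JSON: full_name, document_type, account_number, declared_income. "
    "Use null for missing fields.\n\nDocument:\n{document}"
)


def extract_fields(document_text: str) -> dict:
    """Ask the LLM to pull structured fields out of an unstructured document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(document=document_text)}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)


def validate_against_records(fields: dict, known_customers: dict) -> list[str]:
    """Reconcile extracted data with historical records and report discrepancies."""
    issues = []
    record = known_customers.get(fields.get("account_number"))
    if record is None:
        issues.append("Account number not found in historical records.")
    elif record["full_name"] != fields.get("full_name"):
        issues.append("Name mismatch between the document and the account record.")
    return issues
```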

Financial document review

When involved in checks of internal documents (e.g., service agreements, invoices, reports), LLMs can redline factual and stylistic gaps and flag content that does not comply with regulatory requirements such as SEC rules, GLBA, ECOA, TRID, and NAIC standards. Compliance specialists can use LLMs to screen BFSI legal documents and recap new document formats, operational rules, and data privacy requirements applicable to the company’s operations.

Financial data consolidation

LLMs can aggregate the data relevant to a particular business aspect (e.g., customer due diligence, risk assessment, mortgage closing, reporting), summarize and structure it according to user-defined rules, and fill out pre-built document templates with relevant data. Speech-to-text LLMs can capture service-critical details (e.g., requests for quotes, service term adjustments, claims) during real-time customer calls and compose call briefings.

Financial fraud detection

LLMs can instantly spot deviations that may indicate fraud attempts across financial documents, interactions, and transactions using semantic analysis. Suspicious activities are auto-classified by type (e.g., document forgery, compliance breach) and suspected intent (data theft, money laundering, etc.) and reported to fraud investigators.

Intelligent recommendations

BFSI professionals can ask LLMs to reason over the available data, e.g., project loan defaults based on borrower behavior, recommend optimal investment strategies based on asset owners’ financial goals, or suggest improvements to financial product terms based on market insights. This capability can also be offered as self-service on the customer side.

Financial data augmentation

To give the financial firm additional feeds for informed decision-making, LLMs can scan particular web sources, capture case-relevant recent information (e.g., on customer behaviors, investment sentiment, insured risks), and present insights in an easy-to-read form.

Financial data synthesis

LLMs can produce new content from the financial company’s existing knowledge. For example, an LLM can analyze banking call center interactions and draft topical response memos for agents. Also, LLMs can mine data patterns from historical transactions and generate artificial data sets for modeling needs.

Industry-specific LLM use cases

Banking

  • Processing account opening and modification requests.
  • Scenario modeling for treasury transactions.
  • Trade finance document verification.
  • Self-service budgeting and expense tracking for banking customers.
  • Banking contract drafting and proofreading.

Gains to expect: a 10%+ increase in new bank accounts opened, a 20%+ improvement in staff productivity, and reduced financial risks.

Insurance

  • Risk data consolidation from traditional and alternative sources.
  • Defining risk factors that may affect policy pricing.
  • Claim evidence processing and crafting adjuster summaries.
  • Detection of fraudulent and illegitimate claims.
  • Claim triaging for payment.

Gains to expect: informed underwriting and claim decisions, optimized risk pricing, and up to 12x faster quote submissions and claim responses.

Lending

  • Borrower risk discovery and profiling.
  • Knowledge-augmented risk assessment.
  • Loan price benchmarking and optimization.
  • Borrower interaction analytics and delinquency prediction.
  • Planning personalized debt collection strategies.

Gains to expect: accurate risk scoring, up to 2.5x quicker loan closing, a 5–35% increase in written loans, and a 20% reduction in defaults.

Investment

  • Gathering and summarizing capital market data online.
  • Investment sentiment analysis.
  • Data-driven portfolio construction and optimization.
  • Wealth management and trading advisory.
  • Data synthesis for algorithmic trading models.

Gains to expect: early capture of high-yield market signals, higher investment profitability, and accurate wealth management advice.

Ways to Adapt LLMs for Financial Services

Pretrained LLMs like GPT-4, Claude, and Gemini are sufficient for general-purpose tasks like designing training materials on known financial concepts or crafting BFSI marketing content. However, to effectively address specialized financial tasks, LLMs require enhancement with professional knowledge and each company’s proprietary data.

Below, ScienceSoft’s AI consultants outline four popular approaches to adapting LLMs for finance and describe their benefits, limitations, and prominent application areas:

Prompt engineering

  • Essence: Designing tailored prompt templates with built-in context and instructions to get specific responses from LLMs. Doesn’t involve changing LLM parameters.
  • Benefits: A quick and economical way to make an LLM aware of the firm’s brand identity, offerings, and customer service workflows.
  • Data access: Access to the most recent external data (compliance requirements, market, etc.) depends on the LLM provider’s approach to model retraining.
  • Best for: Automated technical support for BFSI customers and employees.
  • Unfeasible for: Tasks requiring access to the financial firm’s most recent internal operational and financial data.
  • Costs: $

Retrieval-augmented generation (RAG)

  • Essence: Implementing mechanisms that retrieve prompt-relevant proprietary financial data and feed it to the LLM together with the prompt. No changes to the original LLM are made.
  • Benefits: Full explainability of LLM outputs, ease of enforcing compliance, and access to the company’s most recent internal data and policies.
  • Best for: Automating tasks that require access to the most recent information, such as document processing or sanction screening.
  • Unfeasible for: Personalized communication tasks (adding prompt engineering to RAG bridges this gap).
  • Costs: $$

Parameter-efficient fine-tuning (PEFT)

  • Essence: Adjusting a small set of LLM parameters to add finance-specific reasoning. Low-rank adaptation can be applied to reduce the number of trainable parameters.
  • Benefits: Strong LLM affinity with the target financial domain without costly retraining.
  • Data access: Keeping up with the company’s most recent internal data and policies requires continuous model tuning; access to the most recent external data depends on the LLM provider’s approach to model retraining.
  • Best for: Automating tasks that deal with established financial service policies, e.g., KYC, fraud detection, or customer guidance on core financial products.
  • Unfeasible for: Market discovery for product design, service pricing, portfolio planning, and other tasks requiring access to dynamic external data.
  • Costs: $$$

LLM retraining and full fine-tuning

  • Essence: Retraining a pre-built LLM on a labeled custom dataset. All LLM parameters and weights are set and tuned from scratch.
  • Benefits: The ability to introduce custom financial reasoning logic.
  • Data access: Keeping up with the company’s most recent internal data and policies requires continuous model retraining; access to the most recent external data depends on the LLM provider’s approach to model retraining.
  • Best for: Developing brand-new financial LLM software products like AI @ Morgan Stanley Debrief.
  • Unfeasible for: Corporate LLM implementations due to high costs.
  • Costs: $$$$$
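
To make the PEFT option more concrete, here is a brief sketch of low-rank adaptation (LoRA) using the Hugging Face transformers and peft libraries. The base model name and the hyperparameter values are illustrative assumptions, not recommendations tied to any specific project.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # placeholder open-source base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA injects small low-rank matrices into selected attention projections,
# so only a tiny fraction of the parameters is trained on the finance dataset.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of all weights

# peft_model can now be fine-tuned on a labeled financial dataset with a
# standard training loop, then served with the adapters or merged back.
```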

Not Sure Which Approach Is Best for Your Case?

ScienceSoft’s data scientists and financial IT consultants are ready to discuss your needs and assess which LLM adaptation methods are feasible for your case.

Financial LLM Solution Architecture

Based on ScienceSoft’s experience, applying cost-effective RAG and prompt engineering methods is, in most cases, enough to ensure precise and relevant LLM responses. Below, our consultants share a sample architecture of RAG-enabled financial LLM solutions we create:

Financial LLM architecture 

Users interact with LLMs via role-based LLM apps (for account managers, underwriters, claim specialists, BFSI customers, etc.). The app can be built as a standalone solution, implemented on top of existing software (e.g., a loan management system, an insurance portal, a mobile banking app), or launched as a browser extension.

A user’s request (prompt) is instantly processed in the app’s back end (orchestrator). The orchestrator queries the financial company’s data storage for prompt-relevant structured data (names, dates, rates, indices, etc.). In parallel, it sends a request to the RAG embedding model to retrieve unstructured contextual data (e.g., financial documents, customer interaction histories, regulatory policies). To be semantically searchable, unstructured data first needs to be cleansed, chunked, converted to multidimensional vectors, and stored in a vector database (ScienceSoft uses a metadata vectorization pipeline to automate these processes). The results of the hybrid search are rescored and merged into a single optimal set of outputs by the reranking model.
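
As a toy illustration of this retrieval step, the sketch below chunks documents, embeds them, and runs a cosine-similarity search over an in-memory index. The `embed` function is a placeholder for the actual embedding model, and the in-memory index stands in for a real vector database and reranker.

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder for the RAG embedding model (e.g., a sentence-transformer)."""
    raise NotImplementedError


def chunk(document: str, size: int = 500) -> list[str]:
    """Naive fixed-size chunking; real pipelines also cleanse chunks and attach metadata."""
    return [document[i:i + size] for i in range(0, len(document), size)]


class InMemoryVectorIndex:
    """Minimal stand-in for a vector database storing chunk embeddings."""

    def __init__(self):
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, document: str) -> None:
        for piece in chunk(document):
            self.chunks.append(piece)
            self.vectors.append(embed(piece))

    def search(self, query: str, top_k: int = 5) -> list[str]:
        q = embed(query)
        sims = [
            float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in self.vectors
        ]
        ranked = sorted(zip(sims, self.chunks), key=lambda pair: pair[0], reverse=True)
        # In the architecture above, a reranking model would rescore these
        # candidates together with the structured-search hits before the
        # orchestrator assembles the final prompt.
        return [piece for _, piece in ranked[:top_k]]
```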

The orchestrator combines the gathered contextual data with the original prompt and puts them into a pre-engineered prompt template. We create custom prompt templates considering frequent inquiry topics, finance specifics, and prompt length limits defined by LLM providers. The enriched prompt is routed to the integrated LLM. The Large Language Model Operations (LLMOps) framework, which is incorporated into the orchestration layer, enables continuous communication between the LLM app, models, and contextual data sources.
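
A simplified example of how an orchestrator might merge retrieved context into a pre-engineered prompt template is shown below. The template wording and the character-based truncation limit are illustrative assumptions only.

```python
PROMPT_TEMPLATE = """You are an assistant for a regulated financial services firm.
Answer the user's question using only the context below, and cite the source of every fact.
If the context is insufficient, say so instead of guessing.

Structured data:
{structured_data}

Retrieved documents:
{retrieved_chunks}

Question: {question}
"""

MAX_CONTEXT_CHARS = 12_000  # crude stand-in for the provider's prompt length limit


def build_prompt(question: str, structured_data: str, retrieved_chunks: list[str]) -> str:
    """Fill the template with structured data and (truncated) retrieved context."""
    context = "\n---\n".join(retrieved_chunks)[:MAX_CONTEXT_CHARS]
    return PROMPT_TEMPLATE.format(
        structured_data=structured_data,
        retrieved_chunks=context,
        question=question,
    )
```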

The pretrained LLM analyzes the prompt and provides its response alongside links to source data and citations. The inference is logged and validated in the orchestrator and, if accurate and relevant, submitted to the user. The user provides feedback on the value of the received response, which can be further used for LLM reinforcement learning.
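
The logging, validation, and feedback steps could look roughly like this. The grounding check shown (requiring at least one citation that matches a retrieved source) is a deliberately simple stand-in for production-grade response validation.

```python
import logging

logger = logging.getLogger("llm_orchestrator")


def validate_response(answer: str, cited_sources: list[str], retrieved_sources: list[str]) -> bool:
    """Accept the answer only if it is non-empty and cites a retrieved source."""
    grounded = bool(answer.strip()) and any(
        source in retrieved_sources for source in cited_sources
    )
    logger.info("inference logged: grounded=%s, citations=%d", grounded, len(cited_sources))
    return grounded


def record_feedback(prompt: str, answer: str, user_rating: int) -> dict:
    """Capture user feedback; such records can later feed reinforcement learning or fine-tuning."""
    return {"prompt": prompt, "answer": answer, "rating": user_rating}
```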

ScienceSoft’s Featured Financial LLM Project

LLM-Supported Smart Search for Mobile Banking App Users

In just 4 weeks, ScienceSoft provided a top high-tech bank with a functional concept, a technical design, and an implementation plan for an LLM-powered smart search solution. Our advice on the cost-effective tech stack and hardware ensured optimized project investments.

Based on our estimates, bringing LLM-supported search to mobile banking apps at scale will allow our client to boost CSAT by 7%+, cut the servicing teams’ workload by over 60%, and increase financial product cross-selling potential.

Techs and Tools We Use to Implement LLMs for Finance

Top Concerns About LLMs for Financial Services, Addressed

ISSUE: LLM providers may inadvertently disclose sensitive financial data.

FIXED: Include explicit contractual clauses that prohibit the use of your data for training LLMs, anonymize and encrypt data sent to LLMs, and apply privacy-preserving prompt tuning (RAPT) methods.

ISSUE: LLMs might hallucinate and present incomplete, incorrect, or outdated financial data.

FIXED: Employ the latest versions of multiple LLMs, each best suited to particular tasks. Give LLMs access to up-to-date business data using RAG and set up LLM response validation mechanisms.

ISSUE: LLM data processing and reasoning logic might misalign with regulatory requirements.

FIXED: Give LLMs access to up-to-date regulatory policies via RAG, incorporate compliance rules into prompt templates, and implement automated compliance checks for LLM outputs.

ISSUE: The opaque logic of LLM outputs might complicate the proof of ethical conduct.

FIXED: Incorporate requirements for source citations into prompt templates and use techniques like LIME and SHAP to make LLM logic transparent, traceable, and interpretable.

ISSUE: LLMs risk sensitive data exposure, compliance breaches, and misuse through malicious queries.

FIXED: Use federated learning to obscure direct associations between sensitive data and LLMs, apply mechanisms to detect malformed queries, and implement filtering systems to block poor-quality outputs.

ISSUE: High complexity and costs of financial LLM software implementation.

FIXED: Rely on proven LLM services, frameworks, and OOTB components to streamline development and optimize costs. Prioritize vendors proficient in AI and fintech to minimize project risks.

ScienceSoft’s Senior Data Scientist

Once your financial LLM app is in place, you need to keep improving it to maintain quality over time. Make sure your LLMOps toolkit enables real-time model monitoring and issue spotting so that you can implement corrections in a timely manner. Metrics like coherence, factuality, groundedness, and fairness can reveal gaps in LLM performance and accuracy. Additionally, consider setting up a feedback loop for finance specialists and customers so that user opinions on model outputs can inform fine-tuning and retraining plans.
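
As a rough illustration of such monitoring, the snippet below aggregates per-response quality scores and flags metric drift against a baseline. The metric names follow the ones mentioned above; how each score is produced (human review, automated evaluators, or an LLM-as-judge setup) is left open and assumed to happen upstream.

```python
from statistics import mean

METRICS = ("coherence", "factuality", "groundedness", "fairness")


def detect_drift(scored_responses: list[dict], baseline: dict, tolerance: float = 0.05) -> list[str]:
    """Compare average metric scores of recent responses against baseline values."""
    alerts = []
    for metric in METRICS:
        current = mean(response[metric] for response in scored_responses)
        if current < baseline[metric] - tolerance:
            alerts.append(f"{metric} dropped from {baseline[metric]:.2f} to {current:.2f}")
    return alerts


# Any alert would trigger a review and, if needed, prompt updates or fine-tuning.
```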

Costs of Implementing an LLM Solution for Financial Services

The cost of implementing an LLM solution for financial services may vary from $250,000 to $1,000,000+, depending on solution complexity, model enhancement approach, architectural and tech stack choices, and security and compliance requirements.

Here are ScienceSoft’s sample estimates for various scenarios:

$250,000–$350,000

An LLM-powered chatbot for financial customers, using RAG to accommodate the company’s specifics.

$300,000–$500,000+

An LLM copilot for finance professionals, adapted to sectoral knowledge using RAG (a minor degree of PEFT may be needed).

$1,000,000+

An LLM-based financial assistant, additionally trained on a proprietary dataset to introduce brand-new AI automation features for commercial use.

Finance LLM Consulting and Implementation With ScienceSoft

In AI software development since 1989 and in financial IT since 2005, ScienceSoft engineers tailored LLM solutions for quick, accurate, and efficient BFSI operations. In our projects, we rely on robust quality and security management systems backed by ISO 9001 and ISO 27001 certifications.

Financial LLM consulting

We design the optimal feature set, secure architecture, and cost-effective tech stack for your LLM solution. You also receive a detailed project plan with cost and time estimates and a pragmatic risk mitigation plan for smooth and predictable LLM implementation.


Financial LLM implementation

We handle the project end to end, from LLM app development and testing to LLM integration and adaptation to your business. Our PMs take full responsibility for project management tasks. You get an MVP of your financial LLM solution in 3–5 months.


Financial LLM maintenance

Our team takes over the continuous maintenance of your LLM solution and can also provide user support. We monitor LLM accuracy, gather user feedback, and fine-tune the model(s) for higher precision. Also, we can audit your existing LLM app and help revamp the solution.


Our BFSI Clients About ScienceSoft’s Practices

We are impressed with ScienceSoft’s pragmatic project management, quality-first mindset, and transparent communication. They are strongly motivated to deliver maximum value with their services.

What stood out was ScienceSoft's proactive suggestions for cost-saving architecture design and tech stack solutions. Their input ensured we stayed within budget without compromising on software quality.

We especially appreciate ScienceSoft’s professional approach to security issues, which were among our main concerns due to strict regulations.

What makes ScienceSoft different

We achieve project success no matter what

ScienceSoft does not pass mere project administration off as project management, which, unfortunately, often happens on the market. We practice real project management, achieving project success for our clients no matter what.
