
Academics and practitioners often cite back-testing and model validation as areas where progress with AI and machine learning is likely to become visible soon. Banks are considering machine learning to make sense of large, unstructured and semi-structured datasets and to police the outputs of primary models. Back-testing is important because it is traditionally used to evaluate how well banks’ risk models are performing.
In recent years, US and European prudential regulators have focused on the back-testing and validation practices used by banks, providing guidance on model risk management. Using a range of financial settings for back-testing allows shifts in market behaviour and other trends to be considered, reducing the potential for underestimating risk in such scenarios.
Some applications are already live. For instance, one global corporate and investment bank is using unsupervised learning algorithms in model validation. Its equity derivatives business has used this type of machine learning to detect anomalous projections generated by its stress-testing models. Each night, these models produce over three million computations to inform regulatory and internal capital allocations and limit monitoring. A small fraction of these computations are extreme, knocked out of the normal distribution of results by a quirk of the computation cycle or faulty data inputs.
Unsupervised learning algorithms help model validators in the ongoing monitoring of internal and regulatory stress-testing models, as they can help determine whether those models are performing within acceptable tolerances or drifting from their original purpose. They can also provide additional input to operational risk models, such as the vulnerability of organisations to cyber-attacks.
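The anomaly-detection side of this work can be illustrated with a small sketch. The example below uses scikit-learn's IsolationForest on simulated output data; the feature layout and contamination rate are illustrative assumptions, not any bank's actual configuration.

```python
# Minimal sketch: flagging anomalous stress-test outputs with an
# unsupervised model. Column layout and contamination rate are
# illustrative assumptions, not any bank's actual configuration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for one night's batch of valuation/stress results
# (rows = computations, columns = summary features of each run).
normal = rng.normal(loc=0.0, scale=1.0, size=(10_000, 4))
extreme = rng.normal(loc=8.0, scale=3.0, size=(20, 4))   # faulty inputs / quirks
batch = np.vstack([normal, extreme])

# Fit on the batch itself; IsolationForest needs no labels.
detector = IsolationForest(contamination=0.005, random_state=0)
labels = detector.fit_predict(batch)          # -1 = anomaly, 1 = normal
scores = detector.decision_function(batch)    # lower = more anomalous

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} computations flagged for validator review")
```

In practice the flagged computations would be routed to model validators for review rather than being discarded automatically.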
Similarly, AI and machine learning techniques are also being applied to stress testing. The increased use of stress testing following the financial crisis has posed challenges for banks as they work to analyse large amounts of data for regulatory stress tests. One provider of AI and machine learning tools has worked closely with a large financial institution to develop tools that assist it in modelling its capital markets business for bank stress testing. The tools developed aim to limit the number of variables used in scenario analysis for the loss given default and probability of default models. By using unsupervised learning methods to review large amounts of data, the tools can document any bias associated with the selection of variables, thereby leading to better models with greater transparency.
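As a rough illustration of how unsupervised methods can limit the number of scenario variables, the sketch below clusters highly correlated macro drivers and keeps one representative per cluster. The variable names, thresholds and clustering choice are assumptions for illustration; this is not the provider's actual tooling.

```python
# Sketch: limiting the number of scenario variables by clustering
# highly correlated candidates and keeping one representative per
# cluster. A stand-in for proprietary tooling, not the vendor's method.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
n = 500
base = rng.normal(size=(n, 3))
# Hypothetical macro drivers; several are near-duplicates of each other.
macro = pd.DataFrame({
    "gdp_growth": base[:, 0],
    "industrial_prod": base[:, 0] + 0.1 * rng.normal(size=n),
    "unemployment": base[:, 1],
    "jobless_claims": base[:, 1] + 0.1 * rng.normal(size=n),
    "credit_spread": base[:, 2],
})

corr = macro.corr().abs()
dist = squareform(1.0 - corr.values, checks=False)   # distance = 1 - |corr|
clusters = fcluster(linkage(dist, method="average"), t=0.3, criterion="distance")

# Keep the first variable in each cluster as its representative.
selected = (pd.Series(macro.columns, index=clusters)
              .groupby(level=0).first().tolist())
print("variables retained for PD/LGD scenarios:", selected)
```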
More broadly, the areas where AI can be used in finance can be roughly classified into the following five categories.
• Screening: creation of credit models for screening business loan applications, credit card loan applications, and housing loan applications, as well as reduction of clerical workload

Loan applications are screened using Heterogeneous Mixture Learning. First, attribute data is collected on cases where payments were delinquent or the loan fell into default. Next, the data is fed into the Heterogeneous Mixture Learning system so that it can learn from it. The system then automatically generates prediction formulas showing which screening items are associated with default and under what conditions.

Subsequently, when a screening target’s data is fed into these prediction formulas, which were developed from the previously collected data, the system can determine whether or not to accept the loan application and provide the basis for that judgment. Heterogeneous Mixture Learning can help improve competitiveness by reducing costs and screening time, while its most distinctive aspect – that the basis for its judgment can be known – provides the loan manager with useful reference material for making a final decision. The trends underlying these judgments also make it possible to see where improvements are needed and can lead to new product development.
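Heterogeneous Mixture Learning itself is proprietary, but the overall workflow can be sketched with an interpretable stand-in model. The example below trains a shallow decision tree on hypothetical application data and prints the learned conditions, which play the role of the "prediction formulas" described above.

```python
# Sketch of the screening workflow using an interpretable stand-in
# (a shallow decision tree) for Heterogeneous Mixture Learning, which
# is proprietary. Feature names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 2_000
apps = pd.DataFrame({
    "annual_income": rng.normal(60_000, 20_000, n),
    "debt_to_income": rng.uniform(0.0, 0.8, n),
    "years_employed": rng.integers(0, 30, n),
})
# Hypothetical delinquency/default label used as training data.
default = ((apps["debt_to_income"] > 0.5) &
           (apps["annual_income"] < 50_000)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    apps, default, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

# The learned conditions play the role of the "prediction formulas":
# they show which screening items matter and under what conditions.
print(export_text(model, feature_names=list(apps.columns)))
print("holdout accuracy:", model.score(X_test, y_test))
```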

• Fraud detection: fraudulent use of credit cards and cash cards, fraudulent insurance claims, illegal transactions, and transfer scams

RAPID Machine Learning is used to detect fraudulent financial transactions. RAPID follows the same learning process as Heterogeneous Mixture Learning: it analyzes past data in order to build a model that can predict future outcomes. In this case, data on transactions found to be fraudulent in the past must be collected and analyzed first. When there is not enough learning data – that is, data on transactions conclusively determined to be fraudulent – it can be supplemented with data on transactions deemed suspicious by investigators and systems. All of this data is then fed into the RAPID Machine Learning system, which uses multi-layer neural networks to generate prediction models, automatically extracting the patterns and characteristics of fraudulent transactions at a level of precision only deep learning can achieve.

When we feed the transaction data that we want the system to adjudicate into these prediction models, it outputs the degree – expressed as score values – to which the data matches the characteristics of fraudulent transactions. The higher the score value the system outputs, the higher the likelihood that the transaction is fraudulent. Compared to a binary judgment based on fixed threshold values, this makes it possible to prioritize investigation targets by turning degrees of suspicion into scores. Moreover, when this is combined with unstructured data such as images and text, it also becomes possible to discover new tendencies that have so far been overlooked. By providing investigators with a powerful and reliable tool for assessing the likelihood of fraud, we anticipate that this system will significantly reduce the difficulty of fraud investigations.
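A minimal sketch of this scoring workflow is shown below, using a generic multi-layer perceptron from scikit-learn as a stand-in for RAPID Machine Learning. The transaction features and labels are simulated; the point is the pattern of producing ranked suspicion scores rather than binary decisions.

```python
# Sketch: a neural-network fraud scorer that outputs a suspicion score
# per transaction rather than a hard yes/no. This is a generic MLP
# stand-in, not the RAPID Machine Learning product itself.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 5_000
X = rng.normal(size=(n, 6))                       # transaction features
fraud = (X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=n) > 3).astype(int)

scorer = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
scorer.fit(X, fraud)

# Score new transactions and rank them for investigators.
new_tx = rng.normal(size=(10, 6))
scores = scorer.predict_proba(new_tx)[:, 1]       # higher = more suspicious
for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"priority {rank}: transaction {idx}, score {scores[idx]:.3f}")
```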

• Marketing and forecasting: numerical value prediction to achieve maximum impact at minimum cost, including promotion prediction, demand prediction, and stock price prediction

The well-known efficient-market hypothesis (EMH) suggests that stock prices reflect all currently available information and that prices change as new relevant information is revealed. Stock market prediction is an area of great importance in finance, yet the market and its patterns are highly unstable. Investors and market analysts study the relationship between market news and the movement of prices, and plan their buying and selling accordingly. News articles can therefore play an important role in predicting the movement of stock prices, since price movements depend on how investors respond to those articles.

There is already a good deal of scientific work on predicting the impact of news on a stock price, but much of it uses basic features (such as bags-of-words, named entities, etc.), does not capture structured entity-relation information, and hence lacks accuracy. We build an attention-based neural network model (ATT-NN) to predict the impact of a news article on a stock, using news articles, stock meta-information and the stock’s moving average as inputs. The inputs are encoded in their respective representations using long short-term memory cells, and the correlations between news articles and the stock are computed. We then predict whether the stock’s price will increase or decrease by 0.4% of its current value, or whether the impact will be smaller than that percentage.
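The sketch below outlines, in PyTorch, one plausible shape for such a model: LSTM encoders for the news-embedding sequence and the price/moving-average sequence, an attention step that relates the stock representation to the news tokens, and a three-way output head. The dimensions, inputs and architecture details are assumptions for illustration, not the exact ATT-NN described above.

```python
# Minimal PyTorch sketch in the spirit of the ATT-NN described above:
# LSTM encoders for a news-embedding sequence and a price/moving-average
# sequence, attention from the stock representation over the news tokens,
# and a 3-way head (up >= 0.4%, down >= 0.4%, smaller move). Dimensions
# and data are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NewsImpactModel(nn.Module):
    def __init__(self, news_dim=50, price_dim=2, hidden=64, n_classes=3):
        super().__init__()
        self.news_lstm = nn.LSTM(news_dim, hidden, batch_first=True)
        self.price_lstm = nn.LSTM(price_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, news_seq, price_seq):
        news_h, _ = self.news_lstm(news_seq)            # (B, Tn, H)
        _, (price_h, _) = self.price_lstm(price_seq)    # (1, B, H)
        query = price_h[-1].unsqueeze(1)                # (B, 1, H)
        # Attention: how strongly each news token correlates with the stock.
        attn = F.softmax(torch.bmm(query, news_h.transpose(1, 2)), dim=-1)
        news_ctx = torch.bmm(attn, news_h).squeeze(1)   # (B, H)
        features = torch.cat([news_ctx, query.squeeze(1)], dim=-1)
        return self.classifier(features)                # logits over 3 classes

# Smoke test with random tensors standing in for real inputs.
model = NewsImpactModel()
news = torch.randn(8, 30, 50)      # 8 articles, 30 tokens, 50-d embeddings
prices = torch.randn(8, 20, 2)     # 20 days of (close, moving average)
logits = model(news, prices)
print(logits.shape)                # torch.Size([8, 3])
```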

• Matching and recommendation: creation of opportunities such as aptitude assessment for human resource management and recruitment, M&A recommendations, investment advice (robo-advisors), and product purchase recommendations

The general consensus among experts is that humans will remain indispensable. The human touch will remain critical, as advisors will still need to reassure customers during difficult financial times and persuade them with helpful solutions. A study performed by consulting firm Accenture revealed that 77% of wealth management clients trust their financial advisors while 81% indicate that face-to-face interaction is important. For clients with complex investment decisions, the hybrid advisory model, which couples computerized services with human advisors, is gaining traction.
While financial advisors will remain central, robo-advisors may cause shifts in their job responsibilities. With AI managing repetitive tasks, investment managers might take on the responsibilities of a data scientist or engineer, such as maintaining the system. Humans may also focus more on client relationship-building and explaining the decisions the machine has made.

• Collection, analysis, and visualization of large volumes of data: analysis of customer comments directed to contact centers, automation of help desks, social data on social media, and analysis of news articles

AI is capable of rapidly absorbing know-how and knowledge that takes humans many years to accumulate. It is precisely for this reason that AI is being increasingly applied in financial institutions. Consider, for example, analyzing customer comments recorded in contact-center histories or entered in questionnaires. If words expressing gratitude such as “thank,” “grateful,” and “appreciate” are used as search keywords, it is possible to find sentences with the opposite meaning, such as “you didn’t say you were grateful to me.” Excluding such hits one at a time would be extremely time-consuming. However, because Textual Entailment technology is capable of recognizing meaning, it can distinguish a sentence like this from sentences that genuinely express gratitude and drop it from the results. In this way, sentences with the specified meaning can be correctly extracted. The extracted sentences can also be classified into groups such as acknowledgments, claims, and opinions to facilitate analysis.

For example, when Textual Entailment is applied to news sites or social media sites, it can judge whether articles and postings about a certain product have positive or negative connotations and classify and aggregate them accordingly. This makes it possible to achieve almost real-time understanding of customer comments about products and services, helping hasten feedback into services.
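A rough approximation of this kind of meaning-aware filtering can be built from open-source natural language inference models. The sketch below uses the Hugging Face transformers zero-shot classification pipeline as a stand-in for the proprietary Textual Entailment technology; it requires the transformers package, downloads a pretrained model on first use, and the labels and example comments are illustrative.

```python
# Sketch: an entailment-style filter using an open-source NLI model as a
# stand-in for the proprietary Textual Entailment technology described
# above. Labels and example sentences are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

comments = [
    "Thank you so much, the advisor resolved my issue quickly.",
    "You didn't say you were grateful to me.",   # keyword match, opposite meaning
    "I want to complain about the fees on my account.",
]
labels = ["expression of gratitude", "complaint", "opinion"]

for comment in comments:
    result = classifier(comment, candidate_labels=labels)
    print(f"{result['labels'][0]:<26} <- {comment}")
```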

Banks are making big bets with their client-facing virtual assistants, known as chatbots. While the early versions of chatbots will only be able to answer basic questions about spending limits and recent transactions, future versions are slated to become full-service virtual assistants that can make payments and track budgets for consumers. Engaging with customers in this way can translate into significant cost savings, but human interactions are also undoubtedly more complex than straightforward number crunching. Critics point to chatbots’ lack of empathy and understanding, which humans may need when dealing with difficult financial decisions and situations. For this use case, AI-based natural language processing will be essential for interpreting and responding to personalized customer concerns and wishes. Here are some real-life instances:

o In October 2016, both Bank of America and MasterCard unveiled their chatbots, Erica and Kai, respectively. These will allow customers to ask questions about their accounts, initiate transactions and receive advice via Facebook Messenger or Amazon’s Echo.
o Capital One has also launched its own chatbot, named “Eno”, which is an anagram of “One”. Eno enables customers to chat with the bank using text-based language to pay bills and retrieve account information.
o Barclays is also getting in on the action. In describing Bank of America’s new chatbot, Michelle Moore, the head of digital banking at Bank of America, declared, “What will banking be in two, three or four years? It’s going to be this”.

Here are some more concrete, narrowly focused instances of deploying artificial intelligence to maintain the integrity of future financial transactions and to automate the complete ecosystem:

Approving credit and assigning credit limits to customers
In the past, expert systems have been used to exploit a knowledge base and make certain decisions. However, with unreliable or incomplete information, such expert systems may fail to achieve the expected quality and performance standards. Hence, by modelling the way humans learn, an artificial neural system can be leveraged to perform decision-making and aid a variety of financial tasks.
However, tasks requiring accuracy of computational results or intensive calculations are best left to conventional computer applications. Artificial neural networks, or artificial neural systems (ANS), are best applied to problem environments that are highly unstructured, require some form of pattern recognition and may involve incomplete or corrupted data.
An ANS might, for example, be created to simulate the behavior of a firm’s credit customers as economic conditions change. The input vectors could consist of economic data and customer-specific data, and the output could be the expected purchase/payment behavior of the customer given the input conditions. Training data would be based on actual behavior of customers in the past. Such a system would be useful for planning for bad-debt expenses and the cyclical expansion and contraction of accounts receivable and for evaluating the credit terms and limits assigned to individual customers.
While the task of approving customers for credit and assigning credit limits is generally delegated to lower-level financial staff, it is still a labor-intensive and time-consuming process that has a significant impact on the profitability of most companies. Approval procedures based on credit scoring can be successfully implemented with conventional computer equipment and software, but such systems cannot incorporate the subjective and otherwise nonquantifiable elements of a human’s decision process. In addition, much of the information concerning customers does not come to the decision-maker in a standard format (e.g., Dun & Bradstreet credit reports have a standardized form, but financial statements display a remarkable diversity).
An ANS could be trained using customer data as the input vector and the actual decisions of the credit analyst as the desired output vector. The objective of the system would be to mimic the human decision-maker in granting or revoking credit and setting credit limits. In addition, the system would be able to deal with the diversity of input information without requiring that the information be restated in a standard form.
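A compact sketch of this idea is given below: a small neural network is trained to reproduce analysts' past credit-limit decisions from customer data. The features, the synthetic "analyst" rule and the network size are hypothetical; a real system would train on historical decisions.

```python
# Sketch: training a small neural network to mimic past analyst
# decisions on credit limits, as described above. The features, labels
# and network size are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n = 1_000
# Input vector: customer data (financials, payment history, etc.).
X = np.column_stack([
    rng.normal(1_000_000, 400_000, n),   # annual revenue
    rng.uniform(0, 1, n),                # on-time payment rate
    rng.integers(1, 20, n),              # years as a customer
])
# Desired output vector: the credit limits actually set by analysts
# (here replaced by a synthetic rule for illustration).
analyst_limits = 0.05 * X[:, 0] * X[:, 1] + 5_000 * X[:, 2]

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2_000, random_state=0),
)
model.fit(X, analyst_limits)

new_customer = np.array([[750_000, 0.9, 3]])
print(f"suggested credit limit: {model.predict(new_customer)[0]:,.0f}")
```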

Predicting currency exchange rates with minimal training sets and higher accuracy
Contrary to typical beliefs, it has been shown that a minimal training set can improve a neural network’s currency exchange rate prediction accuracy. For most exchange rate predictions, a maximum of two years of training data produces the best neural network forecasting performance. The smaller-than-expected training set sizes that produce the optimal forecasting performance led to the induction of the Time Series Recency Effect, which states that constructing models with data that is closer in time to the data to be forecast produces higher quality models.
The Time Series Recency Effect has several direct benefits for both neural network researchers and developers. These benefits are:
• Evidence that disproves the time-honored heuristic of using the greatest quantity of data available for producing time series models.
• Production of higher quality models (with better forecasting performance) through the use of smaller quantities of data, enabling improvements in current neural network time series models of 5 percent or more.
• A net reduction in the development costs of neural network time series models, since less training data is required.
• A net reduction in development time, since smaller training set sizes typically require fewer training iterations to accurately model the training data.
The neural networks that utilize two-year rolling training groups for forecasting the exchange rate do not always achieve greater than 60 percent forecasting accuracy. Eight out of the 21 (nearly 40 percent) neural networks trained with only two years of data do achieve forecast accuracies of greater than 60 percent and two other neural network forecasting models are at a nearly 60 percent level. However, the purpose of examining the rolling data sets is to compare the two-year training set models with the larger training set models. Each of the neural networks trained with only a two-year sample size consistently outperforms the corresponding neural network trained with the largest possible training sample size for any given test set. The average difference for all nine comparisons is 8.6 percentage points, with a maximum difference of 16.37 percentage points in favor of the two-year training set models.
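The evaluation pattern behind these comparisons can be sketched as follows: train one directional forecasting model on a rolling two-year window and another on all available history, then compare their accuracy on the same held-out period. The exchange-rate series below is simulated and the model is a simple logistic regression, so the printed numbers are not meaningful; only the comparison procedure is.

```python
# Sketch: comparing a rolling two-year training window with "use all
# available history" for a simple directional exchange-rate forecast.
# The data is simulated; the point is the evaluation pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
days_per_year = 250
rate = np.cumsum(rng.normal(0, 0.005, 10 * days_per_year)) + 1.0

def make_features(series, lags=5):
    # Each row holds the last `lags` observations; the label is whether
    # the next observation moves up.
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = (series[lags:] > series[lags - 1:-1]).astype(int)
    return X, y

X, y = make_features(rate)
test_start = len(X) - days_per_year                # hold out the final year

def accuracy(train_window):
    lo = 0 if train_window is None else test_start - train_window
    model = LogisticRegression(max_iter=1_000).fit(X[lo:test_start], y[lo:test_start])
    return model.score(X[test_start:], y[test_start:])

print("two-year window :", accuracy(2 * days_per_year))
print("full history    :", accuracy(None))
```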

Leveraging genetic algorithms and neural fuzzy inference systems to predict underpricing in IPOs
Neural fuzzy inference systems have been widely adopted in various application domains as decision support systems. Especially in real-world scenarios such as decision-making in financial transactions, human experts may be more interested in knowing the comprehensive reasons behind the advice provided by a decision support system, in addition to how confident the system is in that advice.
An integrated autonomous computational model termed the genetic algorithm and rough set incorporated neural fuzzy inference system (GARSINFIS) can be used to predict underpricing in initial public offerings (IPOs).
The difference between a stock’s potentially higher market value and its actual IPO price is referred to as money-left-on-the-table; its theoretical foundations have been extensively studied in the corporate finance literature, but it is surprisingly under-investigated in the field of computational decision support systems. Specifically, GARSINFIS is used to derive interpretable rules for determining whether there is money left on the table in an IPO, to assist investors in their decision-making. For performance evaluation, the balance between accuracy and interpretability in GARSINFIS can be tuned by simply altering the values of several coefficient parameters, using well-known datasets.
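GARSINFIS itself combines genetic algorithms, rough sets and neuro-fuzzy inference, which is well beyond a short example, but the underlying idea of evolving interpretable rules can be sketched very simply. The toy genetic algorithm below searches for a single threshold rule that predicts "money left on the table" on simulated IPO data; the features, fitness function and data are all hypothetical and this is not the GARSINFIS model.

```python
# Highly simplified sketch: a genetic algorithm searching for a single
# interpretable threshold rule that predicts "money left on the table"
# in IPOs. Illustrates the accuracy-vs-interpretability idea only.
import numpy as np

rng = np.random.default_rng(6)
n = 800
# Hypothetical IPO features: offer-price revision, oversubscription ratio,
# market return in the month before listing.
X = np.column_stack([rng.normal(0, 0.1, n),
                     rng.lognormal(1, 0.5, n),
                     rng.normal(0.01, 0.05, n)])
underpriced = (0.5 * X[:, 0] + 0.1 * np.log(X[:, 1]) +
               rng.normal(0, 0.05, n) > 0.05).astype(int)

def fitness(rule):
    feat, thresh, direction = rule
    pred = (X[:, int(feat)] > thresh) if direction > 0 else (X[:, int(feat)] <= thresh)
    return np.mean(pred.astype(int) == underpriced)

def random_rule():
    feat = rng.integers(0, X.shape[1])
    return np.array([feat, rng.choice(X[:, feat]), rng.choice([-1, 1])])

def mutate(rule):
    child = rule.copy()
    child[1] += rng.normal(0, X[:, int(child[0])].std() * 0.1)  # perturb threshold
    return child

population = [random_rule() for _ in range(40)]
for _ in range(30):                        # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]              # elitist selection
    population = parents + [mutate(p) for p in parents for _ in range(3)]

best = max(population, key=fitness)
feat_names = ["price_revision", "oversubscription", "pre_market_return"]
op = ">" if best[2] > 0 else "<="
print(f"rule: {feat_names[int(best[0])]} {op} {best[1]:.3f}"
      f"  (training accuracy {fitness(best):.2f})")
```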