Operations in Financial Services—An Overview (3 of 6)

3.3. Performance Analysis Through Data Envelopment Analysis (DEA)

There are numerous studies on performance and productivity analyses of retail banking that are based on DEA. DEA is a technique for evaluating productivity measures that can be applied to service industries in general. It compares productivity measures of different entities (e.g., bank branches) within the same service organization (e.g., a large retail bank) to one another. Such a comparative analysis then boils down to solving a fractional linear program. DEA has been used in many retail banks to compare the productivity measures of their various branches with one another. Sherman and Gold (1985), Sherman and Ladino (1995), and Seiford and Zhu (1999) performed such studies for US banks; Oral and Yolalan (1990) performed such a study for a bank in Turkey; Vassiloglou and Giokas (1990), Soteriou and Zenios (1999a), Zenios et al. (1999), Soteriou and Zenios (1999b), and Athanassopoulos and Giokas (2000) for Greek banks; Kantor and Maital (1999) for a large Mideast bank; and Berger and Humphrey (1997) for various international financial services firms. These papers discuss operational efficiency, profitability, quality, stock market performance, and the development of better cost estimates for banking products via DEA. Cummins et al. (1999) use DEA to explore the impact of organizational form on firm performance. Comparing mutual and stock property-liability insurance companies, they find that stock companies perform better in activities requiring greater managerial discretion and cost efficiency, while mutual companies perform better in lines of insurance with long payout periods.
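To make the fractional-program formulation concrete, the sketch below solves the input-oriented CCR model after the standard Charnes–Cooper transformation to a linear program. It uses SciPy and invented branch data; it illustrates the generic DEA formulation and is not a reproduction of any of the cited studies.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of decision-making unit (DMU) o under the input-oriented
    CCR model, i.e., the fractional program max u·y_o / v·x_o subject to
    u·y_j / v·x_j <= 1, linearized via the Charnes-Cooper transformation."""
    n, m = X.shape                      # number of DMUs, number of inputs
    s = Y.shape[1]                      # number of outputs
    # decision variables z = [u_1..u_s, v_1..v_m]
    c = np.concatenate([-Y[o], np.zeros(m)])                   # maximize u·y_o
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v·x_o = 1
    A_ub = np.hstack([Y, -X])                                  # u·y_j - v·x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun                                            # efficiency in (0, 1]

# Hypothetical branch data: inputs = [staff, operating cost],
# outputs = [transactions processed, accounts opened].
X = np.array([[10.0, 20.0], [8.0, 15.0], [12.0, 30.0]])
Y = np.array([[200.0, 50.0], [160.0, 45.0], [180.0, 40.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(len(X))])
```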

Cook and Seiford (2009) present an excellent overview of DEA developments over the past 30 years, and Cooper et al. (2007) provide a comprehensive textbook on the subject. For a good survey, and for cautionary notes on the pitfalls of improper interpretation and use of DEA results (e.g., loosely using the results for evaluative purposes when uncontrollable variables exist), see Metters et al. (1999). Zhu (2003) discusses methods for solving imprecise DEA (IDEA), in which data on inputs and outputs are bounded, ordinal, or ratio bounded, so that the original linear programming DEA formulation can no longer be used.

Koetter (2006) discusses stochastic frontier analysis (SFA) as another framework for bank efficiency analysis, which contrasts with the deterministic approach of DEA.

4. FORECASTING

Forecasting is very important in many areas of the financial services industry. In its most familiar form, visible to customers and the general public, it consists of economic and market forecasts developed by research and strategy groups in brokerage and investment management firms. The types of forecasting we discuss here, however, tend to be internal to the firms and not visible from the outside.

4.1. Forecasting in the Management of Cash Deposits and Credit Lines

Deposit-taking institutions (e.g., commercial banks, savings and loan associations, and credit unions) are interested in forecasting the future growth of their deposits. They use this information in the process of determining the value and pricing of their deposit products (e.g., checking, savings, and money market accounts, and also CDs), for asset–liability management, and for capacity considerations. Of special interest to these institutions are demand deposits, more broadly defined as non-maturity deposits. Demand deposits have no stated maturity, and the depositor can add to the balance without restriction or withdraw from it “on demand,” i.e., without warning or penalty. In contrast, time deposits, also known as CDs, have a set maturity and an amount established at inception, with penalties for early withdrawals. Forecasting techniques have been applied to demand deposits because of their relative non-stickiness due to the absence of contractual penalties. A product with similar non-stickiness is credit card loans. Jarrow and Van Deventer’s (1998) model for valuing demand deposits and credit card loans using an arbitrage-free methodology assumes that demand deposit balances depend only on the future evolution of interest rates; however, it does allow for more complexity, such as macroeconomic variables (income or unemployment) and local market or firm-specific idiosyncratic factors. Janosi et al. (1999) use a commercial bank’s demand deposit data and aggregate data for negotiable order of withdrawal (NOW) accounts from the Federal Reserve to empirically investigate Jarrow and Van Deventer’s model. They find demand deposit balances to be strongly autoregressive, i.e., future balances are highly correlated with past balances. They develop regression models, linear in the logarithm of balances, in which past balances, interest rates, and a time trend are predictive variables. O’Brien (2000) adds income to the set of predictive variables in the regression models. Sheehan (2004) adds month-of-the-year dummy variables in the regressions to account for calendar-specific inflows (e.g., bonuses or tax refunds) or outflows (e.g., tax payments). He focuses on core deposits, i.e., checking accounts and savings accounts; distinguishes between the behavior of total and retained deposits; and develops models for different deposit types, i.e., business and personal checking, NOW, savings, and money market account deposits.
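As an illustration of the autoregressive log-balance regressions described above, the following minimal sketch fits such a regression on simulated monthly data. The data-generating coefficients and the use of statsmodels are assumptions made for illustration; they are not the specifications of any of the cited models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 120                                   # 10 years of monthly observations
rate = 0.02 + 0.01 * np.sin(np.arange(T) / 12) + rng.normal(0, 0.002, T)
log_bal = np.empty(T)
log_bal[0] = np.log(1_000.0)
for t in range(1, T):                     # autoregressive log balances (assumed process)
    log_bal[t] = 0.2 + 0.97 * log_bal[t - 1] - 2.0 * rate[t] \
                 + 0.0005 * t + rng.normal(0, 0.01)

df = pd.DataFrame({"log_bal": log_bal, "rate": rate, "trend": np.arange(T)})
df["lag_log_bal"] = df["log_bal"].shift(1)    # previous month's log balance
df = df.dropna()

# Regression linear in the log of balances, with lagged balance, rate, and trend.
X = sm.add_constant(df[["lag_log_bal", "rate", "trend"]])
fit = sm.OLS(df["log_bal"], X).fit()
print(fit.params)                         # estimated coefficients
```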

Labe and Papadakis (2010) discuss a propensity score matching model that can be used to forecast the likelihood of Bank of America’s retail clients bringing new funds to the firm by subscribing to promotional offerings of CDs. Such promotional CDs carry an above-market premium rate for a limited period of time. Humphrey et al. (2000) forecast the adoption of electronic payments in the United States; they find that one of the reasons for the slow pace of moving from checks to electronic payments is customers’ perceived loss of float. Many electronic payment systems now address this by allowing for payment at the due date rather than immediately.

Revolving credit lines, or facilities, give borrowers access to cash on demand for short-term funding needs, up to credit limits established at facility inception. Banks typically offer these facilities to corporations with investment-grade credit ratings, which have access to cheaper sources of short-term funding (e.g., commercial paper) and do not draw significant amounts from them except:

(i) for very brief periods of time under normal conditions,

(ii) when severe deterioration of their financial condition causes them to lose access to the credit markets, and

(iii) during system-wide credit market dysfunction, such as during the crisis of 2007–2009.

Banks that offer these credit facilities must set aside adequate, but not excessive, funds to satisfy the demand for cash by facility borrowers. Duffy et al. (2005) describe a Monte Carlo simulation model that Merrill Lynch Bank used to forecast these demands for cash by borrowers in its revolver portfolio. The model uses industry data for revolver usage by borrower credit rating, and assumes Markovian credit rating migrations, correlated within and across industries. Migration probabilities were provided by a major rating agency, and correlation estimates were calculated by Merrill Lynch’s risk group. The model was used by Merrill Lynch Bank to help manage liquidity risk in its multibillion-dollar portfolio of revolving credit lines.
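The sketch below illustrates this type of simulation in highly simplified form: hypothetical Markovian rating migrations (independent across borrowers, so without the within- and across-industry correlations of the actual model) drive assumed draw-down fractions on committed lines, and the simulation produces a distribution of peak cash demand. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-year rating transition matrix (states: A, BBB, BB, Default).
P = np.array([
    [0.92, 0.06, 0.015, 0.005],
    [0.05, 0.88, 0.05,  0.02 ],
    [0.01, 0.07, 0.84,  0.08 ],
    [0.00, 0.00, 0.00,  1.00 ],
])
usage = np.array([0.05, 0.15, 0.40, 0.75])   # assumed drawn fraction by rating
limits = np.full(100, 10.0)                  # $10M committed per facility
start = rng.choice([0, 1], size=100, p=[0.7, 0.3])

n_paths, horizon = 1_000, 3                  # annual steps over a 3-year horizon
peak = np.empty(n_paths)
for k in range(n_paths):
    r = start.copy()
    worst = float(np.sum(limits * usage[r]))
    for _ in range(horizon):
        # migrate each borrower independently (correlations omitted in this sketch)
        r = np.array([rng.choice(4, p=P[i]) for i in r])
        worst = max(worst, float(np.sum(limits * usage[r])))
    peak[k] = worst

print("95th percentile of peak cash demand ($M):", np.percentile(peak, 95))
```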

Forecasting the future behavior and profitability of retail borrowers (e.g., for credit card loans, mortgages, and home equity lines of credit) has become a key component of the credit management process. Forecasting involved in a decision to grant credit to a new borrower is known as “credit scoring,” and its origins in the modern era can be found in the 1950s. A discussion of credit scoring models, including related public policy issues, is offered by Capon (1982). Forecasting involved in decisions to adjust credit access and marketing effort for existing borrowers is known as “behavioral scoring.” The book by Thomas et al. (2002) contains a comprehensive review of the objectives, methods, and practical implementation of credit and behavioral scoring. The formal statistical methods used for classifying credit applicants into “good” and “bad” risk classes are known as “classification scoring.” Hand and Henley (1997) review a significant part of the large body of literature on classification scoring. Baesens et al. (2003) examine the performance of standard classification algorithms, including logistic regression, discriminant analysis, k-nearest neighbor, neural networks, and decision trees; they also review more recently proposed ones, such as support vector machines and least-squares support vector machines (LS-SVM). They find that LS-SVM and neural network classifiers, as well as simpler methods such as logistic regression and linear discriminant analysis, have good predictive power. In addition to classification scoring, other methods include:

(i) “response scoring,” which aims to forecast a prospect’s likelihood to respond to an offer for credit, and

(ii) “balance scoring,” which forecasts the prospect’s likelihood of carrying a balance if they respond.

To improve the chances of acquiring and maintaining profitable customers, offers for credit should be mailed only to prospects with high credit, response, and balance scores. Response and balance scoring models are typically proprietary. Trench et al. (2003) discuss a model for optimally managing the size and pricing of card lines of credit at Bank One. The model uses account-level historical transaction information to select for each cardholder, through Markov decision processes, the annual percentage rates and credit lines that optimize the net present value of the bank’s credit portfolio.
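As a toy-scale illustration of classification scoring, the sketch below fits a logistic regression classifier, one of the standard algorithms compared by Baesens et al. (2003). The features and the label-generating process are invented for the demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical applicant features: income ($K), debt-to-income ratio,
# number of past delinquencies.
income = rng.normal(60, 20, n).clip(10, None)
dti = rng.uniform(0.05, 0.6, n)
delinq = rng.poisson(0.5, n)

# Assumed "true" default process, used only to generate labels for the demo.
logit = -2.0 - 0.02 * income + 4.0 * dti + 0.8 * delinq
default = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([income, dti, delinq])
X_tr, X_te, y_tr, y_te = train_test_split(X, default, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]        # estimated probability of a "bad" outcome
print("AUC:", round(roc_auc_score(y_te, scores), 3))
```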

4.2. Forecasting in Securities Brokerage, Clearing, and Execution

In the last few decades, the securities brokerage industry has seen dramatic change. Traditional wirehouses charging fixed commissions evolved or were replaced by diverse organizations offering full-service, discount, and online trading channels, as well as research and investment advisory services. This evolution has introduced a variety of channel choices for retail and institutional investors. Pricing, service mix and quality, and human relationships are key determinants in the channel choice decision. Firms are interested in forecasting channel choice decisions by clients, because these decisions greatly impact capacity planning, revenue, and profitability. Altschuler et al. (2002) discuss simulation models developed for Merrill Lynch’s retail brokerage to forecast client choice decisions upon the introduction of lower-cost offerings that complement the firm’s traditional full-service channel. Client choice decision forecasts were used as inputs in the process of determining the proper pricing for these new offerings and for evaluating their potential impact on firm revenue. The results of a rational economic behavior (REB) model were used as a baseline. The REB model assumes that investors optimize the value they receive by always choosing the lowest-cost option (determined by an embedded optimization model that was solved for each of millions of clients and their actual portfolio holdings). The REB model’s results were compared with those of a Monte Carlo simulation model. The Monte Carlo simulation allows for more realistic assumptions. For example, clients’ decisions are impacted not only by price differentials across channels, but also by the strength and quality of the relationship with their financial advisor, who typically represents the higher-cost options.
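The contrast between the two approaches can be illustrated with the stylized sketch below. All parameters, including the logit form of the switching probability, are invented for illustration and are not those of the Merrill Lynch models.

```python
import numpy as np

rng = np.random.default_rng(2)
n_clients = 10_000

# Hypothetical annual cost to each client under the two channels.
cost_full_service = rng.lognormal(mean=7.0, sigma=0.5, size=n_clients)
cost_online = 0.4 * cost_full_service          # assumed discount of the new channel

# REB baseline: every client picks the cheaper channel.
reb_switch = cost_online < cost_full_service   # here: everyone switches

# Monte Carlo with advisor relationship: stronger relationships reduce
# the probability of switching despite the cost advantage (assumed logit form).
relationship = rng.uniform(0, 1, n_clients)    # 0 = weak, 1 = strong
savings = (cost_full_service - cost_online) / cost_full_service
p_switch = 1.0 / (1.0 + np.exp(-(3.0 * savings - 4.0 * relationship)))
mc_switch = rng.random(n_clients) < p_switch

print("REB share switching:        ", reb_switch.mean())
print("Monte Carlo share switching:", mc_switch.mean())
```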


Labe (1994) describes an application of forecasting the likelihood of affluent prospects becoming Merrill Lynch’s priority brokerage and investment advisory clients (defined as clients with more than US$250,000 in assets). Merrill Lynch used discriminant analysis, a method akin to classification scoring, to select high quality households to target in its prospecting efforts.

The trading of securities in capital markets involves key operational functions that include:

(i) clearing, i.e., establishing mutual obligations of counterparties in securities and/or cash trades, as well as guarantees of payments and deliveries, and

(ii) settlement, i.e., transfer of titles and/or cash to the accounts of counterparties in order to finalize transactions.

Most major markets have centralized clearing facilities so that counterparties do not have to settle bilaterally and assume credit risk to each other. The central clearing organization must have robust procedures to satisfy obligations to counterparties, i.e., minimize the number of trades for which delivery of securities is missed. It must also hold adequate, but not excessive, amounts of cash to meet payments. Forecasting the number and value of trades during a clearing and settlement cycle can help the organization meet the above objectives; it can achieve this by modeling the clearing and settlement operation using stochastic simulation. A different approach is used by de Lascurain et al. (2011): they develop a linear programming method to model the clearing and settlement operation of the Central Securities Depository of Mexico and evaluate the system’s performance through deterministic simulation. The model’s formulation in de Lascurain et al. (2011) is a relaxation of a mixed integer programming (MIP) formulation proposed by Güntzer et al. (1998), who show that the bank clearing problem is NP-complete. Eisenberg and Noe (2001) include clearing and settlement in a systemic financial risk framework.
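The kind of LP relaxation involved can be sketched on a toy netting problem (this is an illustration, not the actual formulation of de Lascurain et al. 2011): choose the fraction of each payment instruction to settle so that no bank’s net outflow exceeds its available cash, while maximizing the total value settled. The MIP studied by Güntzer et al. (1998) restricts the fractions to 0 or 1.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical payment instructions: (payer, payee, amount), banks 0..2.
payments = [(0, 1, 40.0), (1, 2, 30.0), (2, 0, 25.0), (0, 2, 15.0), (1, 0, 20.0)]
cash = np.array([10.0, 15.0, 5.0])       # opening cash balances per bank
n_banks, n_pay = len(cash), len(payments)

# LP relaxation: x_k in [0, 1] is the settled fraction of payment k.
c = -np.array([amt for _, _, amt in payments])     # maximize settled value
A_ub = np.zeros((n_banks, n_pay))
for k, (payer, payee, amt) in enumerate(payments):
    A_ub[payer, k] += amt                          # outflow of payer bank
    A_ub[payee, k] -= amt                          # inflow of payee bank
b_ub = cash                                        # net outflow <= available cash

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n_pay, method="highs")
print("settled value:", -res.fun, "fractions:", np.round(res.x, 2))
```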

4.3. Forecasting of Call Arrivals at Call Centers

Forecasting techniques are also used in various other areas within the financial services sector; an example is the forecasting of call arrivals at call centers, which is a crucial input to the personnel scheduling process (to be discussed in a later section). To set this in a broader context, refer to the framework of Thompson (1998) for forecasting demand for services. Recent papers that focus on forecasting call center workload include a tutorial by Gans et al. (2003), a survey by Aksin et al. (2007), and a research paper by Aldor-Noiman et al. (2009).

The quality of historical data improves over time, as call centers become increasingly sophisticated in capturing every nuance that a modeler may find useful or interesting. Andrews and Cunningham (1995) describe the autoregressive integrated moving average (ARIMA) forecasting models used at L.L. Bean’s call centers. The time series data used to fit the models exhibit seasonality patterns and are also influenced by variables such as holiday and advertising interventions. Advertising and special calendar effects are addressed by Antipov and Meade (2002). More recently, Soyer and Tarimcilar (2008) incorporate advertising effects by modeling call arrivals as a modulated Poisson process with arrival rates driven by customer calls stimulated by advertising campaigns. They use a Bayesian modeling framework and a data set from a call center that enables tracing calls back to specific advertisements. In a study of FedEx’s call centers, Xu (2000) presents forecasting methodologies used at multiple levels of the business decision-making hierarchy, i.e., strategic, business plan, tactical, and operational, and discusses the issues that each methodology addresses. Methods used include exponential smoothing, ARIMA models, linear regression, and time series decomposition.
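A minimal sketch of fitting a seasonal ARIMA model to daily call volumes follows; the data are synthetic with weekly seasonality, and the model order is an assumption for illustration, not the specification used at L.L. Bean.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
days = pd.date_range("2015-01-01", periods=200, freq="D")
weekly = 200 * np.sin(2 * np.pi * np.arange(200) / 7)       # weekly seasonality
calls = 1_000 + weekly + np.cumsum(rng.normal(0, 20, 200))  # drifting level + noise
y = pd.Series(calls, index=days)

# Seasonal ARIMA with a weekly period; the order is chosen for illustration only.
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 0, 1, 7))
fit = model.fit(disp=False)
print(fit.forecast(steps=14))            # two-week-ahead forecast
```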

At fine time scales, call arrival data may have too much noise. Mandelbaum et al. (2001) demonstrate how to remove relatively unpredictable short-term variability from the data and keep only the predictable variability. They achieve this by aggregating the data at coarser levels, i.e., progressively moving up from minute of the hour to hour of the day, to day of the month, and to month of the year. The elegant textbook assumption that call arrivals follow a Poisson process with a fixed rate that is known or can be estimated does not hold in practice. Steckley et al. (2009) show that forecast errors can be large in comparison to the variability expected in a Poisson process and can have a significant impact on predictions of long-run performance; ignoring forecast errors typically leads to overestimation of performance. Jongbloed and Koole (2001) found that the call arrival data they were analyzing had a variance much greater than the mean, and therefore did not appear to be samples of Poisson distributed random variables. They addressed this “overdispersion” by proposing a Poisson mixture model, i.e., a Poisson model with an arrival rate that is not fixed but random, following a certain stochastic process. Brown et al. (2005) found that data from a different call center also followed a Poisson distribution with a variable arrival rate; the arrival rates were, in addition, serially correlated from day to day. Their proposed prediction model includes the previous day’s call volume as an autoregressive term. High intra-day correlations were found by Avramidis et al. (2004), who developed models in which the call arrival rate is a random variable correlated across time intervals of the same day. Steckley et al. (2004) and Mehrotra et al. (2010) examine the correlation of call volumes in later periods of a day with call volumes experienced earlier in the day, for the purpose of updating workload schedules.
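The overdispersion diagnosis and the mixture remedy can be illustrated with a short sketch: daily counts are simulated from a Poisson distribution whose rate is itself random (gamma-distributed, an assumption made only for illustration), and the sample variance then clearly exceeds the sample mean, unlike for a fixed-rate Poisson process.

```python
import numpy as np

rng = np.random.default_rng(4)
n_days = 1_000
mean_rate = 500.0

# Fixed-rate Poisson: variance is approximately equal to the mean.
fixed = rng.poisson(mean_rate, n_days)

# Poisson mixture (doubly stochastic): the daily rate is gamma-distributed
# around the same mean, producing variance well above the mean.
rates = rng.gamma(shape=25.0, scale=mean_rate / 25.0, size=n_days)
mixed = rng.poisson(rates)

for name, x in [("fixed-rate", fixed), ("mixture", mixed)]:
    print(f"{name:10s} mean = {x.mean():7.1f}  variance = {x.var():9.1f}")
```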

Methods to approximate non-homogeneous Poisson processes often attempt to estimate the arrival rate by breaking up the data set into smaller intervals. Henderson (2003) demonstrates how a heuristic that assumes a piecewise constant arrival rate over time intervals, with a length that shrinks as the volume of data grows, produces good arrival rate function estimates. Massey et al. (1996) fit piecewise linear rate functions to approximate a general time-inhomogeneous Poisson process. Weinberg et al. (2007) forecast an inhomogeneous Poisson process using a Bayesian framework, whereby from a set of prior distributions they estimate the parameters of the posterior distribution through a Markov chain Monte Carlo model. They forecast arrival rates in short intervals of 15–60 minutes of a day of the week as the product of the day’s forecast volume times the proportion of calls arriving during that interval; they also allow for a random error term.
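A minimal sketch of the piecewise-constant idea follows: arrivals are simulated from a time-varying rate by thinning, and the rate function is then estimated as the count in each interval divided by the interval length. The sinusoidal rate function is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def rate(t):                      # hypothetical arrival rate, calls per hour
    return 300 + 200 * np.sin(2 * np.pi * t / 10)   # over a 10-hour "day"

# Simulate a nonhomogeneous Poisson process on [0, 10] hours by thinning.
lam_max, T = 500.0, 10.0
t, arrivals = 0.0, []
while True:
    t += rng.exponential(1.0 / lam_max)
    if t > T:
        break
    if rng.random() < rate(t) / lam_max:
        arrivals.append(t)
arrivals = np.array(arrivals)

# Piecewise-constant estimate: arrivals per half-hour interval / interval length.
edges = np.arange(0.0, T + 0.5, 0.5)
counts, _ = np.histogram(arrivals, bins=edges)
lam_hat = counts / 0.5
print(np.round(lam_hat, 1))       # estimated rate per interval, in calls per hour
```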

Shen and Huang (2005, 2008a, b) develop models for inter-day forecasting and intra-day updating of call center arrivals using singular value decomposition. Their approach results in a significant dimensionality reduction. In a recent empirical study, Taylor (2008) compares the performance of several univariate time series methods for forecasting intra-day call arrivals. The methods tested include seasonal autoregressive and exponential smoothing models, as well as the dynamic harmonic regression of Tych et al. (2002). The results indicate that different methods perform best under different lead times and call volume levels. Other aspects of call center forecasting with significant potential for future research include, for example, the waiting times of calls in queues; see Whitt (1999a, b) and Armony and Maglaras (2004).
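A minimal sketch of the dimensionality-reduction idea, on a synthetic day-by-interval matrix rather than the actual data or model of Shen and Huang: the singular value decomposition of the matrix of intra-day call profiles is truncated to a few components, which then serve as a low-dimensional basis for inter-day forecasting and intra-day updating.

```python
import numpy as np

rng = np.random.default_rng(6)
n_days, n_intervals = 120, 68              # e.g., 68 quarter-hour intervals per day

# Synthetic intra-day profiles: a common shape scaled by a day-level factor, plus noise.
shape = np.sin(np.linspace(0, np.pi, n_intervals)) ** 2
day_level = rng.normal(1_000, 150, size=(n_days, 1))
calls = day_level * shape + rng.normal(0, 30, size=(n_days, n_intervals))

# Truncated SVD: keep the first k singular vectors as the intra-day basis.
U, s, Vt = np.linalg.svd(calls, full_matrices=False)
k = 2
scores = U[:, :k] * s[:k]                  # day-level coefficients (to be forecast)
basis = Vt[:k]                             # intra-day shape components
approx = scores @ basis                    # low-rank reconstruction of the profiles
print("variance explained:", round((s[:k] ** 2).sum() / (s ** 2).sum(), 4))
```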


Emmanuel D. (Manos) Hatzakis, Suresh K. Nair, and Michael Pinedo
