Operations in Financial Services—An Overview (5 of 6)

6.3. Waiting Lines in Call Centers

Since the late 1980s, banks have invested heavily in call center technologies. All major retail banks now operate large call centers on a 24/7 basis. Call centers have therefore been the subject of extensive research; see the survey papers by Pinedo et al. (2000), Gans et al. (2003), and Aksin et al. (2007). The queueing system in a call center is quite different from the queueing systems in a teller or an ATM environment, in several major respects. First, a customer has no direct information with regard to the queue length and cannot estimate his or her waiting time; the customer must rely entirely on the information the service system provides. The service organization, on the other hand, has detailed knowledge concerning the customers who are waiting in queue: it knows each customer's identity and how valuable he or she is to the bank. The bank can therefore put the customers in separate virtual queues with different priority levels. This capability has made priority queueing systems much more applicable in practice; see Kleinrock (1976) for the relevant well-known results in queueing theory. Second, call centers differ from the teller and ATM environments in scale. The number of servers in a teller or an ATM environment is typically at most, say, 20, whereas the number of operators in a call center may be in the hundreds or even in the thousands. In the analysis of call center queues, it therefore becomes possible to apply limit theorems with respect to the number of operators; see Halfin and Whitt (1981) and Reed (2009). Third, a bank has detailed information regarding the skills of each of its operators in a call center (language skills, product knowledge, etc.). This enables the bank to apply skills-based routing to incoming calls; see Gans and Zhou (2003) and Mehrotra et al. (2009).
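As a rough illustration of the many-server regime of Halfin and Whitt (1981): the square-root staffing rule sets the number of agents at roughly R + β√R, where R is the offered load and β a quality-of-service parameter, and the resulting delay probability can be checked with the classical Erlang-C formula. The sketch below uses purely illustrative parameter values, not figures from any cited study.

```python
import math

def erlang_c(n, offered_load):
    """Probability that an arriving call must wait (Erlang-C) in an
    M/M/n queue with offered load a = lambda/mu, requiring a < n."""
    a = offered_load
    # Erlang-B via the numerically stable recursion B(k) = a*B(k-1)/(k + a*B(k-1))
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    # Erlang-C from Erlang-B
    return n * b / (n - a * (1 - b))

# Square-root ("QED") staffing: N = R + beta * sqrt(R)
R = 100.0        # offered load in Erlangs (illustrative)
beta = 1.0       # quality-of-service parameter
N = math.ceil(R + beta * math.sqrt(R))   # 110 agents for this instance
print(N, round(erlang_c(N, R), 3))
```

With β = 1 the delay probability lands near the value predicted by the Halfin-Whitt approximation (roughly 0.2), illustrating that a modest square-root safety margin keeps waiting probabilities moderate even at large scale.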

In call centers, an enormous amount of statistical data is typically collected automatically; see Mandelbaum et al. (2001) and Brown et al. (2005). The data collected automatically are much more extensive than the data collected in an ATM environment. They include the waiting times of the customers, the queue length at each point in time, the proportion of customers that experience no wait at all, and so on.

Lately, many other aspects of queueing in call centers have become the subject of research. This special issue of Production and Operations Management as well as another recent special issue contain several such papers. For example, Örmeci and Aksin (2010) focus on the effects of cross selling on the management of the queues in call centers. They focus on the operational implications of cross selling in terms of capacity usage and value generation. Chevalier and Van den Schrieck (2008), and Barth et al. (2010) consider hierarchical call centers that consist of multiple levels (stages) with a time-dependent overflow from one level to the next. For example, at the first stage, the front office, the customers receive the basic services; a fraction of the served customers requires more specialized services that are provided by the back office. Van Dijk and Van der Sluis (2008) and Meester et al. (2010) consider the service network configuration problem. Meester et al. (2010) analyze networks of geographically dispersed call centers that vary in service and revenue-generation capabilities, as well as in costs. Optimally configuring a service network in this context requires managers to balance the competing considerations of costs (including applicable discounts) and anticipated revenues. Given the large scale of call center operations in financial services firms, the service network configuration problem is important and economically significant. The approach by Meester and colleagues integrates decision problems involving call distribution, staffing, and scheduling in a hierarchical manner (previously, these decision problems were addressed separately).

Even though most call center research has focused on inbound call centers, a limited amount of research has also been done on outbound call centers. Rising delinquencies and the importance of telemarketing have increased the need for outbound calling from call centers. Outbound calling is quite different from inbound calling, because the call center schedules the calls itself rather than being at the mercy of its customers; this presents unique challenges and opportunities. Bollapragada and Nair (2010) focus in this environment on improvements in the contact rates of appropriate parties.


7.1. Preliminaries and General Research Directions

An enormous amount of work has been done on workforce (shift) scheduling in manufacturing. However, workforce scheduling in manufacturing is quite different from workforce scheduling in the services industries. The workforce scheduling process in manufacturing has to adapt itself to inventory considerations and is typically a fairly regular and stable process. In contrast, workforce scheduling in the service industries has to adapt itself to fluctuating customer demand, which in practice is often modeled as a non-homogeneous Poisson arrival process. In practice, adapting the number of tellers or operators to the demand process can be done through an internal pool of flexible workers, or through a partnership with a labor supply agency (see Larson and Pinker 2000).
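A non-homogeneous Poisson arrival process of the kind just mentioned can be simulated by thinning (the Lewis-Shedler method): generate candidate arrivals at a constant majorizing rate and accept each candidate with probability equal to the instantaneous intensity divided by that rate. A minimal sketch, with a purely hypothetical intensity function representing a midday peak:

```python
import math
import random

def simulate_nhpp(rate_fn, lam_max, horizon, rng):
    """Arrival times of a non-homogeneous Poisson process on [0, horizon]
    via thinning: candidates arrive at constant rate lam_max and each is
    kept with probability rate_fn(t) / lam_max (requires rate_fn <= lam_max)."""
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam_max)            # next candidate arrival
        if t > horizon:
            return arrivals
        if rng.random() < rate_fn(t) / lam_max:  # accept ("thin") the candidate
            arrivals.append(t)

# Hypothetical intensity: calls/hour over an 8-hour day, peaking at midday
rate = lambda t: 60 + 40 * math.sin(math.pi * t / 8)
rng = random.Random(7)
arrivals = simulate_nhpp(rate, lam_max=100.0, horizon=8.0, rng=rng)
# expected count is the integral of rate over [0, 8], about 684 calls
```

Simulated arrival streams of this kind feed directly into the staffing models discussed below, letting planners test schedules against realistic intraday demand swings.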

As the assignment of tellers and the hiring of operators depend so strongly on anticipated customer demand, a significant amount of research has focused on probabilistic modeling of arrival processes, on statistical analyses of arrival processes, and on customer demand forecasting in order to accomplish proper staffing. For probabilistic modeling of customer arrival processes, see Stolletz (2003), Ridley et al. (2004), Avramidis et al. (2004), and Jongbloed and Koole (2001). For statistical analyses of customer arrival data, see Brown et al. (2005) and Robbins et al. (2006). For customer demand forecasting, see Weinberg et al. (2007) and Shen and Huang (2008a, b).

From a research perspective, the personnel scheduling problem has been tackled via a number of different approaches, namely, simulation, stochastic modeling, optimization modeling, and artificial intelligence. The application areas considered included the scheduling of bank tellers as well as the scheduling of the operators in call centers. Slepicka and Sporer (1981), Hammond (1995), and Mehrotra and Fama (2003) used simulation to schedule bank tellers and call center operators. Thompson (1993) studied the impact of having multiple periods with different demands on determining the employee requirements in each segment of the schedule; see also Chen and Henderson (2001). Green et al. (2007) and Feldman et al. (2008) have addressed the problem from a stochastic point of view and have developed staffing rules based on queueing theory. So et al. (2003) use better staff scheduling and team reconfiguration to improve check processing in the Federal Reserve Bank.

7.2. Optimization Models for Call Center Staffing

Many researchers have considered this problem from an optimization point of view. An optimization model to determine optimal staffing levels and call routing can take a more aggregate form or a more detailed form. Bassamboo et al. (2005) consider a model with m customer classes and r agent pools; they develop a linear-program-based method to obtain optimal staffing levels as well as call routing. Not surprisingly, a significant amount of work has also been done on the more detailed optimization of the various possible shift structures (including even lunch breaks and coffee breaks). In large call centers, there are many different types of shifts that can be scheduled. A shift is characterized by the days of the week, the starting time at the beginning of the day, the ending time at the end of the day, and the timing of the lunch and coffee breaks. Gawande (1996) (see also Pinedo 2009) solved this staffing problem in two stages. In the first stage, the so-called solid-tour scheduling problem is solved. (A solid tour represents a shift without any breaks; it is characterized only by its starting and ending times.) There are scores of different solid tours available, each with a different starting and ending time. The demand process (the non-homogeneous arrival process) can usually be forecast with a reasonable amount of accuracy. The first-stage problem is to find the number of personnel to hire for each solid tour such that the total number of people available in each time interval fits the demand curve as accurately as possible. This problem leads to an integer programming formulation with a special structure that can actually be solved in polynomial time. Once the first stage has determined the number of people in each solid tour, the second stage places the breaks. The break placement problem is typically very hard and is usually dealt with through a heuristic.
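The first-stage solid-tour problem can be stated as a small covering integer program: choose how many agents to hire for each solid tour so that the headcount present in every period meets demand, minimizing total headcount. The toy instance below (tour spans and demands are invented for illustration) solves it by exhaustive search rather than by the polynomial-time method the text refers to, which exploits the interval structure of the constraint matrix:

```python
from itertools import product

# Toy instance: 6 planning periods with a demand per period, and
# candidate solid tours given as (first_period, last_period) spans.
demand = [3, 5, 6, 6, 4, 2]
tours = [(0, 3), (1, 4), (2, 5)]   # hypothetical tour spans, no breaks

def coverage(hires):
    """Agents present in each period, given a hire count per tour."""
    cov = [0] * len(demand)
    for (start, end), n in zip(tours, hires):
        for p in range(start, end + 1):
            cov[p] += n
    return cov

# Exhaustive search over small hire counts; fine for a toy instance,
# whereas real instances use the integer program's special structure.
best = None
for hires in product(range(8), repeat=len(tours)):
    if all(c >= d for c, d in zip(coverage(hires), demand)):
        if best is None or sum(hires) < sum(best):
            best = hires
print(best, sum(best))   # minimum total headcount is 7 on this instance
```

Note that the optimum here (7 agents) exceeds the peak-period demand of 6: because no single tour spans the whole day, overlap requirements force extra headcount, which is exactly the tension the integer program resolves.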


Gans et al. (2009) applied parametric stochastic programming to workforce scheduling in call centers. Gurvich et al. (2009) have developed a chance-constrained optimization approach to deal with uncertain demand forecasts. Brazier et al. (1999) applied an artificial intelligence technique, namely co-operative multi-agent scheduling, to workforce scheduling in call centers; their work was done in collaboration with Rabobank in the Netherlands. Harrison and Zeevi (2005) developed a staffing method for large call centers based on stochastic fluid models.
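To illustrate the flavor of a chance-constrained staffing requirement (a simplified stand-in, not the actual formulation of Gurvich et al. 2009): if the uncertain load forecast is modeled as Normal, the smallest staffing level that keeps the probability of demand exceeding capacity below ε follows directly from a normal quantile. All parameter values below are hypothetical.

```python
import math
from statistics import NormalDist

def chance_constrained_staff(mean_load, stdev_load, service_rate, eps):
    """Smallest agent count n such that P(load > n * service_rate) <= eps,
    when the uncertain hourly load is modeled as Normal(mean, stdev).
    A simplified illustration of a chance constraint, not the model
    of Gurvich et al. (2009)."""
    z = NormalDist().inv_cdf(1 - eps)                  # safety quantile
    required_capacity = mean_load + z * stdev_load     # capacity to cover
    return math.ceil(required_capacity / service_rate)

# Hypothetical forecast: 300 +/- 40 calls/hour; an agent handles 6 calls/hour
n = chance_constrained_staff(300.0, 40.0, 6.0, eps=0.05)   # 61 agents
```

The safety margin z·σ above the mean load is what distinguishes this from deterministic staffing: with no forecast uncertainty (σ = 0) the same function returns the bare 50 agents needed to cover the mean.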

Nowadays another aspect of personnel management in call centers has become important, namely call routing. As the clients may have many different demands and the operators may have many different skill sets, call routing has gained a significant amount of interest, see Mehrotra et al. (2009) and Bhulai et al. (2008). Optimal call routing typically has to be considered in conjunction with the optimal cross training of the operators, see Robbins et al. (2007).
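A minimal sketch of one common skills-based routing rule, with invented agent names and skills: route each incoming call to the longest-idle available agent holding the required skill, and overflow (queue) the call when no qualified agent is free.

```python
# Hypothetical agent pool: each agent has a skill set and the time
# at which he or she last became idle (smaller = idle longer).
agents = {
    "ana":   {"skills": {"mortgage", "dutch"}, "idle_since": 3.0},
    "ben":   {"skills": {"cards"},             "idle_since": 1.0},
    "carla": {"skills": {"mortgage", "cards"}, "idle_since": 5.0},
}

def route(call_skill):
    """Longest-idle-qualified-agent routing rule."""
    qualified = [(info["idle_since"], name)
                 for name, info in agents.items()
                 if call_skill in info["skills"]]
    if not qualified:
        return None          # overflow: queue the call or escalate
    # longest idle = smallest idle_since timestamp
    return min(qualified)[1]

print(route("mortgage"))   # -> "ana" (idle longer than carla)
```

Real routing policies trade off such fairness-to-agents rules against service-level targets per customer class, which is why routing and cross-training decisions are studied jointly, as noted above.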

7.3. Miscellaneous Issues Concerning Workforce Scheduling in Financial Services

Very little research has been done on workforce scheduling in other segments of the financial services industry. It is clear that workforce scheduling is also important in trading departments or at trading desks (equity trading, foreign exchange trading, or currency trading). Such departments have to keep track of information on trades, such as the number of trades done per day for each trader, the amounts per trade and in total, and the impact of trades on the risk limits of each trader or of the entire desk. Automated systems exist for recording such information, but operational risk mitigation practices necessitate the involvement of skilled personnel in populating and reviewing the data. In the case of a broker/dealer, relevant pieces of information may include the commission charged for each trade, whether the firm acted in a capacity of principal or agent, and the trading venue (e.g., an exchange, a dark pool, or an electronic communication network). Institutional and retail brokerage and asset management firms need to maintain databases with account-level, client-level, and relationship-level information. Typically such information is taken from forms filled out by clients on hard copy or electronically, and/or from legal agreements signed by the client and the firm. Skilled personnel must perform the back-office tasks of reviewing documents and populating the appropriate fields through user interfaces of databases. Given the regulatory implications of errors in this process (e.g., Sarbanes-Oxley 404 compliance), the personnel assigned to such tasks must receive rigorous specialized training. A factor that may add complexity to workforce scheduling is the emerging trend of outsourcing such back-office tasks, especially to offshore locations in Asia or Europe. In offshore outsourcing, time-zone differences and high turnover of trained professionals can be serious issues to contend with.

As stated earlier, workforce scheduling and waiting line management are strongly interconnected. A larger and better trained workforce clearly results in a better queueing performance. However, workforce scheduling is also strongly connected to operational risk. A larger and better trained workforce clearly results in a lower level of operational risk (especially in trading departments, where human errors can have a very significant impact on the financial performance of the institution). This topic will be discussed in more detail in the next section on operational risk.


8.1. Types of Operational Failures

Operational risk in financial services started to receive attention from the banking community as well as from the academic community in the mid-1990s. Operational risk has since then typically been defined as the risk resulting from inadequate or failed internal processes, people, and systems, or from external events (Basel Committee 2003). It covers product life cycle and execution, product performance, information management and data security, business disruption, human resources, and reputation (see, e.g., the General Electric Annual Report 2009, available at www.ge.com). Failures concerning internal processes can be due to transaction errors (some due to product design) or inadequate operational controls (lack of oversight). Information system failures can be caused by programming errors or computer crashes. Failures concerning people may be due to incompetence or inadequately trained employees, or due to fraud. Indeed, since the mid-1990s a number of events have taken place in financial services that can be classified as rogue trading. This type of event has turned out to be quite serious because a single such event can bring down a financial services firm; see Cruz (2002), Elsinger et al. (2006), Chernobai et al. (2007), Cheng et al. (2007), and Jarrow (2008).

8.2. Relationships between Operational Risk and Other Research Areas

It has become clear over the last decade that the management of operational risk in finance is very closely related to other research areas in operations management, operations research, and statistics. Most of the methodological research in operational risk has focused on probabilistic as well as statistical aspects of operational risk, for example, extreme value theory (EVT) (see Chernobai et al. 2007). The areas of importance include the following:

(i) Process design and process mapping,

(ii) Reliability theory,

(iii) TQM, and

(iv) EVT.

The area of process design and process mapping is very important for the management of operational risk. As discussed in earlier sections of this paper, Shostack (1984) focuses on process mapping in the service industries and, in particular, process mapping in financial services. Process mapping also includes an identification of all potential failure points. This area of study is closely related to the research area of reliability theory.

The area of reliability theory is very much concerned with system and process design issues, including concepts such as optimal redundancies. This area of research originated in the aviation industry; see the classic work by Barlow and Proschan (1975) on reliability theory. One major issue in financial services is the issue of determining the amount of backups and the amount of parallel processing and checking that have to be designed into the processes and procedures. Reliability theory has always had a major impact on process design.

Procedures that are common in TQM turn out to be very useful for the management of operational risk, for example, Six Sigma (see Cruz and Pinedo 2008). However, it has become clear that there are important similarities as well as differences between TQM in the manufacturing industries and operational risk management in financial services. One important difference is that TQM in manufacturing (typically referred to as Six Sigma) is based on statistical properties of the Normal (Gaussian) distribution, because any parameter being measured is as likely to deviate in one direction as in the other. In operational risk management, however, the analysis of operational risk events focuses mainly on the outliers, i.e., the catastrophic events. For that reason, the distributions used differ from the Normal; they may be, for example, the Lognormal or the fat-tailed Lognormal.
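The difference in tail behavior can be made concrete by comparing the probability of a loss five standard deviations above the mean under a Normal model versus a Lognormal model matched to the same mean and variance. The numbers below are purely illustrative.

```python
import math
from statistics import NormalDist

# Target first two moments of the loss variable (illustrative units)
mu_x, sd_x = 1.0, 1.0

# Lognormal(m, s) matched to the same mean and variance, using
# mean = exp(m + s^2/2) and var = (exp(s^2) - 1) * exp(2m + s^2)
s2 = math.log(1 + (sd_x / mu_x) ** 2)
m = math.log(mu_x) - s2 / 2

threshold = mu_x + 5 * sd_x   # a "five sigma" loss

p_normal = 1 - NormalDist(mu_x, sd_x).cdf(threshold)
# For Lognormal X, log X ~ Normal(m, sqrt(s2)), so use the log of the threshold
p_lognormal = 1 - NormalDist(m, math.sqrt(s2)).cdf(math.log(threshold))
```

Even with identical mean and variance, the Lognormal model assigns the five-sigma loss a probability thousands of times larger than the Normal model does, which is why Six Sigma's Normal-based machinery is a poor fit for catastrophic operational losses.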

The statistical analysis often focuses on the occurrences of rare, catastrophic events, because financial services firms are in general quite concerned about being hit by such events. It is for this reason that EVT has become such an important research direction; see De Haan and Ferreira (2006). EVT is based on a limit theorem due to Gnedenko, which specifies the distribution of the maximum of a series of independent and identically distributed (i.i.d.) random variables. These extreme value distributions include the Weibull, Fréchet, and Gumbel distributions. They are typically characterized by three parameters: a location parameter, a scale parameter, and a shape parameter that governs the fatness of the tail. The ability to parameterize the fatness of the tail is essential because the tail determines the probabilities of catastrophic events occurring.
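Gnedenko's limit theorem can be checked empirically in its simplest case: the maximum of n i.i.d. Exponential(1) variables, centered by log n, converges to the Gumbel distribution with CDF G(x) = exp(−exp(−x)). A small Monte Carlo sketch:

```python
import math
import random

# Maxima of n i.i.d. Exponential(1) draws, centered by log n, should
# approximately follow the Gumbel law G(x) = exp(-exp(-x)).
rng = random.Random(42)
n, trials = 500, 2000
maxima = [max(rng.expovariate(1.0) for _ in range(n)) - math.log(n)
          for _ in range(trials)]

gumbel_cdf = lambda x: math.exp(-math.exp(-x))
empirical = sum(v <= 1.0 for v in maxima) / trials
# empirical P(centered max <= 1) should be close to gumbel_cdf(1.0), about 0.69
```

The same block-maxima logic underlies EVT-based operational risk models: one fits an extreme value distribution to the largest losses per period and extrapolates into the unobserved tail.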

8.3. Operational Risk in Specific Financial Sectors

There are several sectors of the financial services industry that have received a significant amount of attention with regard to their exposure to operational risk, namely,

(i) Retail financial services (banking, brokerage, credit cards),

(ii) Trading (equities, bonds, foreign exchange),

(iii) Asset management (retail, institutional).

There are several types of operational risk events that are common in retail financial services. They include security breaches, fraud, and systems breakdowns. Security breaches in retail financial services often involve clients' personal information such as bank account numbers and social security numbers. Such events can seriously damage an institution's reputation, bring about punitive regulatory sanctions, and, in some well-known instances, become the basis for class-action lawsuits. An extensive body of research has focused on understanding information security risk (see Bagchi and Udo 2003, Dhillon 2004, Hong et al. 2003, Straub and Welke 1998, Whitman 2004). Garg et al. (2003) have made an attempt to quantify the financial impact of information security breaches. Fraud as a cause of an operational risk event is a major issue in the credit card business. An enormous amount of work has been done on credit card fraud detection, mainly by researchers in artificial intelligence and data mining; see, for example, Chan et al. (1999). Upgrades and consolidations of information systems may be major causes of operational risk events associated with systems breakdowns.

Operational risk events in trading may be due to human errors, rogue trading (i.e., either unauthorized trades or illegal trades), or system breakdowns. A number of such events have occurred in the last two decades with catastrophic consequences for the institutions involved. Several rogue traders have caused their institutions billions of dollars in losses, see Netter and Poulsen (2003).

Asset management has also been one of the areas within the financial services sector that have recently received a significant amount of research attention as far as operational risk is concerned. During the financial crisis of 2007–2009, some of the most catastrophic losses suffered by investors resulted from failures to properly address operational risk, for example, in the fraud perpetrated by Bernard Madoff (see Arvedlund 2009). Operational risk can be effectively addressed by implementing robust operational infrastructures and controls in organizations that enjoy strong governance, as presented by Alptuna et al. (2010). Operational risk is most salient in the loosely regulated domain of hedge funds. Several studies have shown that many of the hedge funds that have gone under had major identifiable operational issues (e.g., Kundro and Feffer 2003, 2004). Brown et al. (2009a) propose a quantitative operational risk score, the ω-score, for hedge funds that can be calculated from data in hedge fund databases. The purpose of the ω-score is to identify problematic funds in a manner similar to Altman's z-score, which predicts corporate bankruptcies, and can be used as a supplement for qualitative due diligence on hedge funds. In a subsequent study the same authors, Brown et al. (2009b), examine a comprehensive sample of due diligence reports on hedge funds and find that misrepresentation, as well as not using a major auditing firm and third-party valuation, are key components of operational risk and leading indicators of future fund failure.

8.4. Other Research Directions

There are a number of other important research directions that already have received some research attention and that deserve more attention in the future. First, how can we analyze the trade-offs between operational costs (productivity) and operational risk? In the manufacturing and services literatures, some articles have appeared that discuss the trade-offs between costs and productivity on the one hand and quality on the other; see, for example, Jones (1988) and Kekre et al. (2009). However, this issue has not received much attention yet as far as the financial services industry is concerned. Second, a fair amount of research has been done with regard to the mitigation of operational risk. The financial services industry has thoroughly studied what the aviation industry has been doing with regard to operational risk, see Cruz and Pinedo (2008). In particular, it has considered near-miss management practices for operational risk, see Muermann and Oktem (2002). Two more recent approaches for mitigating operational risk include insurance (in particular with regard to rogue traders, see Jarrow et al. 2010) and securitization (e.g., catastrophe bonds), see Cruz (2002). However, these issues concerning mitigation require more study. Third, it has been observed that there is a significant interplay between operational risk, market risk, and credit risk. For example, when the markets are very volatile, it is more likely that human errors may be made or systems may crash. These correlations have to be analyzed in more detail in the future.

The growing body of research on operational risk in financial services presents interesting cross-fertilization opportunities for operations management researchers, given the increasing visibility of operational risk and the potential losses in financial services. For banks, a recent revision of the Basel Capital Accord began to require that risk capital be reserved for potential losses resulting from operational risk. Wei (2006) developed models based on Bayesian credibility theory to quantify operational risk for firm-specific capital adequacy calculations.
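Operational risk capital of the kind the Basel Accord requires is often quantified via the loss distribution approach: compound a frequency distribution (e.g., Poisson) with a severity distribution (e.g., Lognormal) by Monte Carlo, and read a high quantile off the simulated annual-loss distribution. The sketch below uses purely illustrative parameter values and is not the model of Wei (2006).

```python
import math
import random

def simulate_annual_loss(rng, freq_mean=25.0, sev_mu=10.0, sev_sigma=2.0):
    """One simulated year of operational losses: a Poisson event count
    compounded with Lognormal severities (parameters purely illustrative)."""
    # Poisson draw by CDF inversion (the stdlib has no Poisson sampler)
    target = rng.random()
    k, p = 0, math.exp(-freq_mean)
    cum = p
    while cum < target:
        k += 1
        p *= freq_mean / k
        cum += p
    # total annual loss = sum of k Lognormal severities
    return sum(rng.lognormvariate(sev_mu, sev_sigma) for _ in range(k))

rng = random.Random(1)
annual = sorted(simulate_annual_loss(rng) for _ in range(10_000))
var_999 = annual[int(0.999 * len(annual))]   # 99.9% quantile of annual loss
```

The 99.9% quantile mirrors the confidence level used for operational risk capital under the Basel advanced measurement framework; in practice the frequency and severity parameters would be fitted to internal and external loss data rather than assumed.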


Emmanuel D. (Manos) Hatzakis, Suresh K. Nair, and Michael Pinedo

