A Primer on Value at Risk
Value at risk, or VaR, is viewed by some as a massively important measure. It is unique in how it characterizes risk. Most measures show risk as a percentage (as standard deviation and tracking error do) or as a ratio of return to risk (as the Sharpe and Treynor risk-adjusted measures do). VaR shows risk in terms of money—that is, the money that might be lost.
The main purpose of VaR is to assess market risks that result from changes to market prices. VaR assesses risk by using standard statistical techniques that are routinely used in other technical fields. It can be viewed formally as measuring the worst expected loss over a given horizon at a given confidence level (Jorion 2001). We will explain more about this shortly.
To many, VaR sounds like a complex method to evaluate risk. This is due, perhaps, to the approaches that are used, which often sound complicated; this article hopes to make these less perplexing. In addition, how VaR is expressed is unique, which perhaps is why some find it difficult to understand. There are three key characteristics of any VaR statistic:
- Money amount
- Time frame
- Confidence level
For example, we might say that a portfolio’s VaR is $1 million, over the next week, at a confidence of 95%. This means that there is a 95% chance that the most we can lose over the next week is $1 million. Can we lose more? Yes, because we are only confident at the 95% level. There is a 5% chance we can lose more. Might we lose less? Yes, of course. We might not lose anything; this is just a worst-case scenario. It is not truly the worst case, since we could lose more, but it is the worst case within a certain level of confidence. Now you can see why this is confusing.
If we increase our confidence level (to 98%, for example), our VaR increases because we are taking into consideration even more bad events that might occur. As we increase our confidence level, the potential loss increases. If we increase the time horizon, our VaR will also increase, because we have extended the time in which bad things may occur.
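Both effects can be sketched in a few lines of Python, using the normal-distribution assumption that underlies the variance covariance method discussed later. The portfolio value and daily standard deviation here are illustrative assumptions, not figures from the article:

```python
from statistics import NormalDist

# z-scores for one-tailed confidence levels: higher confidence -> larger multiplier
z95 = NormalDist().inv_cdf(0.95)   # ~1.645
z98 = NormalDist().inv_cdf(0.98)   # ~2.054

# Illustrative inputs (assumptions for this sketch)
portfolio_value = 2_000_000   # dollars
daily_sigma = 0.018           # one-day standard deviation of returns

# VaR grows with the confidence level...
var_95_1d = portfolio_value * z95 * daily_sigma
var_98_1d = portfolio_value * z98 * daily_sigma
assert var_98_1d > var_95_1d

# ...and with the horizon (square-root-of-time scaling, a common convention)
var_95_5d = var_95_1d * 5 ** 0.5
assert var_95_5d > var_95_1d
```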
Jorion (2009) offers a few caveats regarding VaR: First, it does not describe the worst loss at the chosen confidence level, meaning we should expect that there will be times when the loss is greater. Second, VaR does not describe the loss in the left tail but rather indicates the probability of such a value occurring. Finally, VaR is measured with some degree of error, as it is subject to normal sampling variation. Given the options available when employing VaR—confidence interval, length of sample period—different results can be obtained.
There are three approaches used to calculate VaR:
- VARIANCE COVARIANCE (VCV): This method uses the variance and covariance of our assets as parameters and assumes that our distribution of returns is normal. In reality the distribution is probably not normal. In spite of this conflict, the VCV remains a commonly used measure and is the one to which we will devote most of our time.
- HISTORICAL: This approach looks at our returns over some prior period (e.g., for the last 1,000 days) and ranks them from worst to best. We then pick a level we are interested in (e.g., 95%) and use that return as our prediction of what is the worst that can happen. We then apply this return to our portfolio to determine what the impact would be. There is no requirement to assume a normal distribution. This approach involves applying the portfolio’s current weights to a time series of historical returns. Jorion (2009) describes this as replaying a “tape” of history with current weights. While an advantage of this method is that it makes no assumptions regarding the distribution, it relies on a short historical window, which may not contain likely market moves, and therefore may miss certain risks. This approach is based on a window of recent historical data that employs an assumption that this window reflects the range of future outcomes. When this is not the case, the results can be misleading.
- MONTE CARLO (MC): This method does not require our returns to be normally distributed but does make certain assumptions about the distribution (for example, that it might be leptokurtic, with a higher peak around the mean and fatter tails than in a normal distribution). Random numbers are created and a simulation is run to try to estimate what might occur; from this we derive the VaR. This method is similar to the historical method, except that random drawings from a prespecified distribution are used to predict market movements. A random number generator produces a distribution from which the returns are drawn and from which the VaR is derived. This method has a significant computational requirement and entails assumptions about the stochastic process. There are also sampling issues because different random numbers will result in different results. The key benefit of the MC VaR is its ability to deal with exotics, such as path-dependent options.
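The historical approach described above can be sketched in a few lines of Python. The return series here is simulated purely for illustration; a real calculation would use the portfolio's actual daily returns with current weights applied:

```python
import random

random.seed(42)
# Hypothetical stand-in for 1,000 days of portfolio returns (illustration only)
returns = [random.gauss(0, 0.018) for _ in range(1000)]

# Rank the returns from worst to best and take the cutoff near the
# 5th percentile: the return that 95% of observed days beat
ranked = sorted(returns)
worst_5pct_cutoff = ranked[int(0.05 * len(ranked))]

# Apply that return to the portfolio to express the loss in dollars
portfolio_value = 2_000_000
historical_var_95 = -worst_5pct_cutoff * portfolio_value
```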
We will step through an example of how to derive the VaR for a portfolio (with two securities at a combined market value of $2 million) using the VCV method. Everything we show can be replicated in Excel, though the math might get a tad more complex with examples involving many more securities.
Although we refer to this approach as variance covariance, we begin by calculating standard deviation. But, as you will see later, we convert standard deviation into variance by squaring it. We will also use correlation rather than covariance. While covariance can also be used, I was shown this approach using correlation and so employ it here. (Note that there is no advantage to one measure over the other.)
We begin by deriving the standard deviation for each of the securities in our portfolio. We take the holdings as of today and find their returns for a prior period. Here we have some flexibility in deciding how far back we wish to go. We would expect to use daily returns and to go back at least 30 trading days. (A common rule of thumb is that a sample needs at least 30 observations to approximate a normal distribution.) The standard deviation formula is quite simple:

σ = √[ Σ (ri − r̄)² / (n − 1) ]

where
ri = the return for period i
r̄ = the average return for the period
n = the number of discrete periods over which standard deviation is being measured
The STDEV Excel function can be used to replicate this formula. The n-1 is used because we are dealing with a sample rather than the entire population. (An alternative formula is the STDEVP function, which replaces n-1 with n, or the total number of returns in the period. There is no consensus as to which approach to employ, though STDEV is a bit more conservative.)
Table 1
Security   Market Value   Weight   Annual Standard Deviation
a          $750,000       37.5%    25%
b          $1,250,000     62.5%    40%
Total      $2,000,000     100.0%
Correlation of a and b: 20%
We have to measure the correlation between each possible pairing of securities in our portfolio. The Excel function for correlation is CORREL. In our example we will use two securities to keep our math simple, though later we will discuss what is involved when you have more than two.
Table 1 provides the details we will use for our VaR calculation. Note that we assume a zero mean, which is a conservative assumption. We next need to derive the standard deviation for the entire portfolio. While you might think this would be a simple or weighted average, it is not. The formula is a tad more complicated:

σp = √( wa²σa² + wb²σb² + 2 wa wb σa σb ρa,b )

where
wa = stock a’s weight
wb = stock b’s weight
σa = stock a’s annual standard deviation
σb = stock b’s annual standard deviation
ρa,b = the correlation of a and b
While this formula might look challenging, it is really quite simple to employ. As noted above, the square of the standard deviation is the variance.
When we apply this formula to our values we obtain 28.401%. This is an annualized value. Let us say we want to derive the VaR for the next day, which is a common requirement: we will need to convert this value into its daily equivalent. This is done by dividing it by the square root of 252. (There are roughly 252 trading days in a year. One might argue that it should be 250, 251, 253, which is fine; the difference will be negligible.) The one-day standard deviation is 1.7891%.
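The calculation just described can be reproduced directly from the Table 1 inputs, reproducing both the annualized and one-day figures:

```python
import math

# Inputs from Table 1
w_a, w_b = 0.375, 0.625        # weights
sigma_a, sigma_b = 0.25, 0.40  # annual standard deviations
rho = 0.20                     # correlation of a and b

# Portfolio variance: the two squared weighted terms plus the cross term
var_p = (w_a * sigma_a) ** 2 + (w_b * sigma_b) ** 2 \
        + 2 * w_a * w_b * sigma_a * sigma_b * rho
sigma_annual = math.sqrt(var_p)              # ~0.28401, i.e., 28.401%

# Convert to a one-day figure using the square root of 252 trading days
sigma_daily = sigma_annual / math.sqrt(252)  # ~0.017891, i.e., 1.7891%
```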
We are now ready to calculate our value at risk. But first, we need to decide what our confidence level will be. Let us say we want to be 95% confident; what multiplier do we use? Well, recall that for a normal distribution, plus or minus one standard deviation covers roughly 68% of the distribution. So how many standard deviations do we have to move from our mean to cover 95% of the distribution? To determine this we can use the Excel NORMSINV function. (The result the function provides is for the confidence of just one side, or tail, of our distribution. Here we are only interested in potential losses, the downside of the distribution.) We simply key in =NORMSINV(0.95) to obtain this value, which is 1.645. The VaR formula is:
VaR(95% confidence) = P × 1.645 × σ
P = the portfolio value
σ = the portfolio’s standard deviation (which we just calculated to be 1.7891%)
If we carry out this math, we find our value at risk to be $58,862. That is, the “most” we can lose over the next day, at a confidence of 95%, is $58,862. Again, there is a 5% chance we could lose more, but we decided to evaluate this at the 95% confidence level.
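Putting the whole example together, the figure can be reproduced end to end, with `NormalDist().inv_cdf` standing in for Excel's NORMSINV:

```python
import math
from statistics import NormalDist

# Inputs from the worked example
portfolio_value = 2_000_000
sigma_annual = math.sqrt((0.375 * 0.25) ** 2 + (0.625 * 0.40) ** 2
                         + 2 * 0.375 * 0.625 * 0.25 * 0.40 * 0.20)
sigma_daily = sigma_annual / math.sqrt(252)

# Equivalent of Excel's =NORMSINV(0.95), rounded to 1.645 as in the article
z = round(NormalDist().inv_cdf(0.95), 3)

# One-day VaR at 95% confidence: ~$58,862
var_95 = portfolio_value * z * sigma_daily
```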
This example was done with only two securities, but if we have 50 securities in our portfolio, is it much harder? The basic math is the same: the challenge is deriving the correlation values. We measure correlations between two securities at a time, meaning that we would have to compare a lot of relationships with 50 securities in our portfolio. How many? The formula to derive the number of correlations is:
n(n − 1) / 2

n = the number of securities
Therefore, if we have 50 securities, the result is 1,225, meaning we would have to derive 1,225 individual correlations. What if we had 100 positions? Then we would need to derive 4,950. Imagine a scenario in which we are measuring VaR for all of our clients, and we have 1,000 portfolios with an average of 50 to 100 securities in each; we would be doing a lot of work. We would get pretty tired of doing this with Excel, which is the reason most firms employ software packages. Also, mapping procedures allow firms to avoid estimating and managing millions of correlations.
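The pair-counting formula is simple enough to check directly against the figures above:

```python
def n_correlations(n: int) -> int:
    """Number of distinct pairwise correlations among n securities: n(n - 1)/2."""
    return n * (n - 1) // 2

assert n_correlations(2) == 1      # our two-security example needs just one
assert n_correlations(50) == 1225
assert n_correlations(100) == 4950
```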
Is this all there is to VaR? Well, no. As we discussed above, there are two other methods plus a variety of alternative approaches to VaR, such as conditional VaR (CVaR), which is the expected loss if events fall outside of the confidence level. But what we have described here is a typical way to derive a basic VaR for a portfolio.
VaR, like many risk measures, has its supporters and detractors. For example, in a recent Wall Street Journal article, Eleanor Laise (2009) pointed out that the Monte Carlo approach to VaR is used to provide a more accurate assessment of risk, as it does not rely on a normal distribution. Writing for the Los Angeles Times, Morgen Witzel (2009) discussed how VaR falls short and referenced a new book by Pablo Triana, Lecturing Birds on Flying: Can Mathematical Theories Destroy the Financial Markets?, which details the model’s shortcomings. It is not the intent of this article to discuss these issues but rather to simply provide an explanation of the measure itself. To gain further insight into this topic, may we suggest a recent article by Neil A. O’Hara (2009) that appeared in the Investment Professional.
Regardless of the criticism, this measure is not going to go away. Therefore, the more you know, the better. This article provided a brief explanation of the concepts. In the next article, “Proceed with Caution: The Pitfalls of Value at Risk,” I provide further elaboration on the controversies surrounding this topic.
Jorion, Philippe. 2001. Value at Risk: The New Benchmark for Managing Financial Risk. New York, NY: McGraw-Hill.
———. 2009. Financial Risk Manager Handbook. Hoboken, NJ: John Wiley & Sons.
Laise, Eleanor. September 8, 2009. “Some Funds Stop Grading on the Curve.” Wall Street Journal.
O’Hara, Neil A. Winter 2009. “The Greater Fool Theory.” The Investment Professional, vol. 2, no. 1, 28–33.
Witzel, Morgen. September 7, 2009. “Financial Crisis Has Deep Roots in Academia.” Los Angeles Times.
–David Spaulding, CIPM, is an internationally recognized authority on investment performance measurement and president of the Spaulding Group Inc. Based in Somerset, New Jersey, the Spaulding Group is a provider of investment performance products and services, including the Journal of Performance Measurement. Spaulding is the author, contributing author, and coeditor of several books on performance measurement. The author thanks Pace University professor Aron Gottesman for discussion related to the methodologies and for reviewing an earlier draft.