Risk Management FAQs

How do you statistically estimate changes to a market variable (for example, in order to calculate a potential future exposure)?
Almost everyone uses some variation of the Lognormal Estimation formula.

What is the Lognormal Estimation formula?
The complete Lognormal Estimation formula (see also the Variation discussed below) is:

Projection = Current * exp[ vol * sqrt(t) * conf int * +/-1 - .5 * vol^2 * t]

where

Projection = the projected value of a variable
Current = the current value of a variable
exp = exponentiation (e raised to a power)
vol = annualized volatility
sqrt(t) = square root of time in years
conf int = number of standard deviations required for a given confidence level
+/-1 = +1 for an increasing variable and -1 for a declining variable
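
Expressed as code, a minimal sketch of the formula (Python; the function and parameter names are illustrative, not from any particular library) might look like:

    import math

    def lognormal_projection(current, vol, t_years, conf_int, direction):
        # current   - current value of the variable
        # vol       - annualized volatility (e.g. 0.10 for 10%)
        # t_years   - time horizon in years
        # conf_int  - number of standard deviations for the confidence level
        # direction - +1 for an increasing variable, -1 for a declining one
        exponent = vol * math.sqrt(t_years) * conf_int * direction - 0.5 * vol ** 2 * t_years
        return current * math.exp(exponent)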

EXAMPLE:

Current Value of Variable A = 100
Annualized Volatility for Variable A = 10%
Confidence Interval for 1-tail 95% confidence level = 1.645

What is the highest value that Variable A can reach within a 95% confidence level over 2 years?

Projection = Current * exp[ vol * sqrt(t) * conf int * +/-1 - .5 * vol^2 * t ]
124.9 = 100 * exp[ .10 * sqrt(2) * 1.645 * 1 - .5 * .10^2 * 2 ]

The projected maximum value of Variable A over 2 years, at a 95% confidence level, is 124.9.
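
The same numbers can be checked with the sketch function above; the downside figure is an additional illustration, not part of the original example:

    # Upward projection at a one-tailed 95% confidence level over 2 years
    print(round(lognormal_projection(100, 0.10, 2, 1.645, +1), 1))   # 124.9

    # The corresponding downward projection at the same confidence level
    print(round(lognormal_projection(100, 0.10, 2, 1.645, -1), 1))   # 78.5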

The following chart shows the typical estimate pattern over time.

[Lognormal Chart]

Key features of this estimation technique, illustrated in the numerical sketch after this list, are:
  • For any given volatility, the further into the future you estimate, the higher the upward projection and the lower the downward projection. This is a desirable quality in a risk management process; without it, a longer-term exposure would appear less risky than a shorter-term one, which is counter-intuitive.
  • For any given point in time, the higher the volatility, the higher the upward projection and the lower the downward projection. This is an intuitively appealing property.
  • The projections are asymmetrical: the upward projections can grow without bound, while the downward projections decline at a decreasing rate, approaching but never reaching zero. This is also a desirable quality, since most market variables cannot be negative.
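
A short numerical sketch of these properties, reusing the function above (the horizons and volatility are arbitrary illustrative choices):

    # Upper and lower 95% projections widen asymmetrically as the horizon grows
    for t in (0.5, 1, 2, 5, 10):
        up = lognormal_projection(100, 0.10, t, 1.645, +1)
        down = lognormal_projection(100, 0.10, t, 1.645, -1)
        print(f"t = {t:>4} years   up = {up:6.1f}   down = {down:5.1f}")
    # The upper bound keeps rising while the lower bound keeps falling toward,
    # but never reaching, zero; a higher volatility widens both sides at every horizon.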

Variation of the Lognormal Estimation Formula: In some applications, the Lognormal Estimation formula is modified by omitting the -.5 * vol^2 * t term. At low volatilities and short time frames, including or omitting this term has a negligible impact on the estimate.
At higher volatilities and/or longer time frames, however, omitting the term results in higher upward and lower downward estimates.
The omission is a conservative precaution: with the term included, a longer-term upward estimate can be lower than a shorter-term upward estimate, and a longer-term downward estimate can be higher than a shorter-term one, which is counter-intuitive and undesirable within a risk management framework. Even so, the risk management process must address this issue to ensure that potentially unreasonable estimates do not lead to incorrect decisions.
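
A brief numerical sketch of the effect on the upward estimate, using a deliberately high volatility so the difference is visible (the helper below simply makes the term optional):

    import math

    def projection_up(current, vol, t, conf_int, include_term=True):
        # Upward projection, with the -0.5 * vol^2 * t term optionally omitted
        drift = -0.5 * vol ** 2 * t if include_term else 0.0
        return current * math.exp(vol * math.sqrt(t) * conf_int + drift)

    # At 60% volatility, the complete formula's upward estimate peaks and then
    # declines with the horizon, while the simplified variant keeps rising.
    for t in (1, 5, 10, 20):
        print(t, round(projection_up(100, 0.60, t, 1.645), 1),
                 round(projection_up(100, 0.60, t, 1.645, include_term=False), 1))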


What are the underlying assumptions for using the Lognormal Estimation formula?
The Lognormal Estimation formula is based on the assumption that the logarithm of the underlying variable's returns is normally distributed (equivalently, that the variable itself is lognormally distributed).
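
Under that assumption, the formula is simply a one-tailed quantile of the lognormal distribution. A quick Monte Carlo check (a sketch that assumes zero drift other than the -0.5 * vol^2 * t adjustment, consistent with the formula above):

    import numpy as np

    rng = np.random.default_rng(0)
    current, vol, t, conf_int = 100.0, 0.10, 2.0, 1.645

    # Simulate terminal values assuming normally distributed log returns
    log_returns = rng.normal(-0.5 * vol ** 2 * t, vol * np.sqrt(t), size=1_000_000)
    terminal = current * np.exp(log_returns)

    print(np.percentile(terminal, 95))                                         # ~124.9 empirically
    print(current * np.exp(vol * np.sqrt(t) * conf_int - 0.5 * vol ** 2 * t))  # 124.9 from the formula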

Is this a valid assumption for most financial variables?
Sort of, but not strictly. Most financial variables have "fat-tailed" distributions, which means that the probability of extreme events (the tails) is higher than a strict normal distribution would suggest. That is why the "once in 100 years" event seems to happen every few years.
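
As a rough illustration of what "fat-tailed" means in practice, compare the chance of a large move under a normal distribution with a Student-t distribution, one commonly cited fat-tailed alternative (the choice of 3 degrees of freedom is arbitrary):

    from scipy.stats import norm, t

    # One-tailed probability of an observation beyond 4 on each standardized scale
    print(norm.sf(4))      # about 3e-05 under a normal distribution
    print(t.sf(4, df=3))   # about 1.4e-02 under a Student-t with 3 degrees of freedom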

[Fat Tail Graph]

If everyone knows that the normal distribution assumption is not strictly valid, why do they still use it?
The primary reason is that there is no viable statistical alternative. Several years ago, researchers reported that they had worked out the shape of the fat tail by examining large amounts of diverse data. While this is an important beginning, working with that distribution is not practical without a mathematical framework for making calculations, and such a framework has not yet been developed.

Additionally, it is argued that given the sensitivity of the result to the volatility input variable (which is subject to interpretation), the additional error caused by using a somewhat flawed statistical approach is insignificant.

If the user is still uncomfortable with the potential understatement of the risk due to the model limitations, the confidence level can be increased, e.g., from 95% to 97.5%.
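
For example, moving from a one-tailed 95% level to 97.5% raises the multiplier from 1.645 to 1.960 standard deviations; applied to the earlier example:

    import math

    # Same inputs as the example above, but at a 97.5% one-tailed confidence level
    print(100 * math.exp(0.10 * math.sqrt(2) * 1.960 - 0.5 * 0.10 ** 2 * 2))   # ~130.6 vs ~124.9 at 95%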

What are some of the most commonly encountered problems with the use of this approach?
Extremely high estimates - Since the projections increase without bound, the formula may produce extremely high values. This problem shows up most commonly in interest rates. Interest rates, at least in developed countries, are usually assumed to "mean revert", i.e., they may rise for a period of time, but they then come back down to "normal" levels. The model may estimate rates so high that most users would question their economic viability. In the case of developing countries, high volatilities combined with high current rates can also produce dramatically high estimates, although arguments against them may be more muted.
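
A purely illustrative example of how quickly the estimates can grow: a 40% rate with 50% annualized volatility projected over 5 years.

    import math

    # Illustrative numbers only: current rate 40%, volatility 50%, 5-year horizon
    print(40 * math.exp(0.50 * math.sqrt(5) * 1.645 - 0.5 * 0.50 ** 2 * 5))   # ~135, i.e. a rate of roughly 135%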

Developing Market currencies - If the currency is pegged, the observed historical volatility will be zero. If the currency is managed, the observed volatility may be relatively low and/or there may be a managed devaluation.

The zero or low volatility will produce a zero or low estimated change, which ignores the very real possibility of a sudden devaluation. This contingency is sometimes modeled with a jump process. Briefly, jump processes allow for the inclusion of a statistical probability of a sudden change in the variable (typically a drop in the value of the currency).

Additionally, managed devaluations should be reflected in the estimate by including a non-zero drift term.
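
A rough Monte Carlo sketch of both adjustments; the devaluation probability, jump size, and drift below are hypothetical inputs that must come from judgment or country analysis, not from the lognormal formula itself:

    import numpy as np

    rng = np.random.default_rng(0)
    current, vol, t = 1.0, 0.01, 2.0   # near-zero observed volatility
    annual_drift = -0.03               # assumed managed depreciation of ~3% per year
    annual_jump_prob = 0.10            # assumed 10% chance per year of a sudden devaluation
    jump_size = 0.30                   # assumed 30% devaluation if the jump occurs

    n = 1_000_000
    # Diffusion part: lognormal with the managed-devaluation drift
    log_ret = rng.normal((annual_drift - 0.5 * vol ** 2) * t, vol * np.sqrt(t), n)
    # Jump part: did at least one devaluation occur over the horizon?
    jumped = rng.random(n) < 1 - (1 - annual_jump_prob) ** t
    terminal = current * np.exp(log_ret) * np.where(jumped, 1 - jump_size, 1.0)

    # The lower tail now reflects the devaluation scenario rather than the near-zero volatility
    print(np.percentile(terminal, 5))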

Individual equities - The use of this statistical approach for equity indices is usually considered reasonable. However, its use for individual equities is usually frowned upon because price changes of individual equities are subject to the idiosyncratic risks of the company (i.e., there are so many jumps that the "normal distribution" assumption cannot be considered valid).