Until now we have focused our attention on the variance, or equivalently the standard deviation, of the return as a tool for measuring risk. The standard deviation measures the spread of the random future return about its mean. In portfolio selection we seek to minimise the variance while maximising the return. However, an investor seeking to measure the risk inherent in an asset held is naturally more concerned with placing a bound on potential losses, while remaining relaxed about possible high levels of profit. Thus one looks for risk measures which focus on the downside risk, that is, measures concerned with the lower tail of the distribution of the return. Variance and standard deviation treat both tails of the distribution symmetrically, so they are not good candidates in this search.
In looking for quantitative measures of the overall risk in a portfolio, we seek a statistic which can be applied universally, enabling us to compare the risks of different types of risky portfolio. Ideally, we look for a number (or set of numbers) that expresses the potential loss at a given level of confidence, enabling the risk manager to judge whether the risk is acceptable.
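The idea of a loss bound at a given confidence level can be illustrated numerically. The following sketch, which is not part of the text, estimates such a bound from a hypothetical sample of past returns by taking an empirical quantile of the losses (a so-called historical estimate); the function name and the sample data are invented for illustration.

```python
def historical_var(returns, confidence=0.95):
    """Estimate the loss threshold that is exceeded with probability
    at most 1 - confidence, from a sample of past returns."""
    losses = sorted(-r for r in returns)   # losses are negative returns
    k = int(confidence * len(losses))      # position of the empirical quantile
    k = min(k, len(losses) - 1)            # guard against the edge case k == n
    return losses[k]

# Hypothetical sample of ten periodic returns.
returns = [0.02, -0.01, 0.005, -0.03, 0.01, -0.02, 0.015, -0.005, 0.0, 0.01]
print(historical_var(returns, 0.95))  # prints 0.03: the 3% loss is the bound
```

With 95% confidence, losses on this sample do not exceed 3% of the position; the estimate is only as good as the sample, which is precisely why a careful definition of such a measure is needed.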
In the wake of spectacular financial collapses in the mid-1990s at Barings Bank and Orange County, Value at Risk (henceforth abbreviated as VaR) became a standard benchmark for measuring financial risk.