## Math nightmares

It's been one year since I launched the Web HP-12C emulator, written in JavaScript, of course. It is currently responsible for more than 50% of the hits to my site, and probably 99% of my AdSense revenue.

It came as a complete surprise to me, considering that I wrote the first version as a distraction to pass some long hours in airport connections. Even the Mac OS X widget version is "selling" well: around 70 package downloads per day in December.

Other pages with quite good hit rates are the Black-Scholes calculator and the current implied volatilities of BOVESPA options. All of these involve some "advanced" financial mathematics, and maybe someone is interested in the challenges I've faced.

First, the HP-12C financial registers, the feature most often used by HP-12C owners. There are five: the number of payments ("n"), the interest rate ("i"), the present value ("PV"), the payment ("PMT") and the future value ("FV"). Each register can be calculated from the other four by simple formulas... except the interest.

For the interest, we need some trial-and-error technique, like linear interpolation. Because of that, it is actually easier to implement all the calculations using interpolation, tweaking the register we want to find until the Net Present Value (NPV) becomes zero (which means all five registers have values that fit each other). UPDATE: calculating the interest needs additional techniques.

The basic algorithm is something like this:

```
set two initial estimates of the target register: 0 and 1
do while not tired of trying:
    calculate NPV for the first estimate
    calculate NPV for the second estimate
    if the absolute value of NPV is less than 0.0000001:
        success, exit loop
    make a new (linear) estimate based on the previous NPVs
```
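In JavaScript, the loop above might look like the sketch below, here solving for PMT. All the names are my own illustration, not the emulator's actual code, and the NPV formula assumes end-of-period payments:

```javascript
// Net Present Value of the five registers (end-of-period payments)
function npv(n, i, pv, pmt, fv) {
    const f = Math.pow(1 + i, -n);       // discount factor (1+i)^-n
    return pv + pmt * (1 - f) / i + fv * f;
}

// Find the PMT that zeroes the NPV, given the other four registers,
// by linear interpolation (secant method) on two running estimates.
function solvePmt(n, i, pv, fv) {
    let x0 = 0, x1 = 1;                  // the two initial estimates
    for (let round = 0; round < 100; round++) {  // "while not tired"
        const y0 = npv(n, i, pv, x0, fv);
        const y1 = npv(n, i, pv, x1, fv);
        if (Math.abs(y1) < 0.0000001) {
            return x1;                   // success
        }
        // new linear estimate based on the previous NPVs
        const x2 = x1 - y1 * (x1 - x0) / (y1 - y0);
        x0 = x1;
        x1 = x2;
    }
    throw new Error("Error 5");          // no convergence
}
```

Since the NPV is linear in PMT (and in PV and FV), the interpolation nails those registers almost immediately; the interesting cases are "n" and, especially, "i".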

I first implemented this back in 1997, and it never caused me any problems. Until now. During some stress tests with really big numbers, comparing the emulator with a real HP-12C, the emulator began to report "Error 5" in several financial calculations where the real HP did just fine. What the hell?!

Some of you may have already figured out the problem: my constant 0.0000001 is not an adequate "epsilon" to detect success of the linear interpolation. Double-precision floating point has only about 16 decimal digits of precision, so if the magnitude of the fixed financial registers gets too big (e.g. 10^9), it is likely impossible for the interpolation to settle the NPV below 10^-7.

Actually, the problem is a bit worse, since the NPV calculation involves multiplications and exponentiations, which amplify the rounding errors. So we can't, for example, demand a tolerance of 7 decimal places when the input magnitude is 9 significant digits (7+9=16). We must be more liberal. Some tests on the HP calculator showed that it always found a solution with only 10 decimal digits of precision, at the cost of a small imprecision in the last digit.

The solutions I found are: to set "epsilon" proportional to the input values (e.g. if the sum of the absolute values of the registers is 10^50, epsilon will be around 10^42), and to not demand 16 digits of precision, but only 8, which works even if "n" (which is an exponent in the NPV formula) is quite big.
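The magnitude-proportional epsilon can be as small as a one-liner. This is my own formulation of the idea, not the emulator's exact code:

```javascript
// Epsilon proportional to the input magnitude: aim for roughly 8
// significant digits instead of a fixed absolute tolerance.
function epsilonFor(pv, pmt, fv) {
    const magnitude = Math.abs(pv) + Math.abs(pmt) + Math.abs(fv);
    return Math.max(magnitude, 1) * 1e-8;  // never smaller than 1e-8
}
```

With registers summing to 10^50, this yields an epsilon around 10^42; with small, everyday values, it falls back to the old absolute tolerance.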

UPDATE: the interpolation method is actually broken for calculating the interest, because the NPV is not monotonic in the interest in some cases. I had to resort to the bisection method, as I did for the option volatility, as explained below.

#### The floating point phantom haunts some other pages

Another problem, similar to this one, showed up in the Black-Scholes calculator as well as in the implied volatility page. When the implied volatility calculation could not find a result, the page reported "Impossible to calculate". It typically happens when an option has a market value that is too low, which would imply a negative volatility, which is impossible. Since it only happens for thinly traded options (that is, the market value is outdated and therefore invalid anyway), it used not to be a problem.

But when the stock markets crashed in October 2008, several options ceased to show their volatilities. At first I thought it happened because option values were lagging behind the corresponding stock values, but a closer look revealed that this was not the case: options with quite big premiums were in the bunch of "impossibles". Great...

As happens with the interest, the implied volatility doesn't have a closed formula either; it must be found by trial and error. I was employing linear interpolation, since I was already used to it from the HP emulator. But most texts talk about using Newton's method for volatility. Without thinking too much, I assumed that was the problem, and reimplemented the volatility calculation using Newton's method.

And surprise: no success. Those same options kept reporting "Impossible to calculate implied volatility". Actually, thinking in math terms, Newton's method is no "better" than linear interpolation. It is just faster when things go well, and handier if you already have the derivative. In the case of Black-Scholes, if you calculate the "greeks" (and you generally do), "vega" is the derivative you employ in Newton's method.
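A hypothetical JavaScript sketch of this Newton iteration follows. All the names are my own illustration, not the site's actual code, and the normal CDF uses the classic Abramowitz-Stegun approximation:

```javascript
// Standard normal cumulative distribution (Abramowitz-Stegun 26.2.17)
function normCdf(x) {
    const t = 1 / (1 + 0.2316419 * Math.abs(x));
    const d = 0.3989422804 * Math.exp(-x * x / 2);
    const p = d * t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 +
              t * (-1.821255978 + t * 1.330274429))));
    return x > 0 ? 1 - p : p;
}

// Black-Scholes premium of a call option
function bsCall(s, k, r, t, sigma) {
    const d1 = (Math.log(s / k) + (r + sigma * sigma / 2) * t) /
               (sigma * Math.sqrt(t));
    const d2 = d1 - sigma * Math.sqrt(t);
    return s * normCdf(d1) - k * Math.exp(-r * t) * normCdf(d2);
}

// Vega: derivative of the premium with respect to volatility
function vega(s, k, r, t, sigma) {
    const d1 = (Math.log(s / k) + (r + sigma * sigma / 2) * t) /
               (sigma * Math.sqrt(t));
    return s * Math.sqrt(t) * Math.exp(-d1 * d1 / 2) / Math.sqrt(2 * Math.PI);
}

// Plain Newton's method: fast when well-behaved, but the next estimate
// can shoot far away, or go negative, when it lands on a flat region.
function impliedVolNewton(premium, s, k, r, t) {
    let sigma = 0.3;                                 // initial estimate
    for (let round = 0; round < 100; round++) {
        const diff = bsCall(s, k, r, t, sigma) - premium;
        if (Math.abs(diff) < 1e-8) return sigma;     // converged
        sigma -= diff / vega(s, k, r, t, sigma);     // Newton step
        if (sigma <= 0) throw new Error("Impossible to calculate");
    }
    throw new Error("Impossible to calculate");
}
```

For "normal" volatilities this converges in a handful of iterations; the trouble starts at the extremes, as I found out the hard way.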

Running more tests, I finally found the problem. The relationship of an option's premium to its implied volatility forms a kind of sigmoid curve. That is, when volatility is close to zero, an increase X in volatility does little for the premium. When volatility is at a "normal" level (e.g. 35%/year), the same increase X in volatility affects the premium much more. But if volatility is too high, the premium again becomes insensitive to the same X increase.

What does that mean for us? It means that, if the initial volatility estimate is too low, both linear interpolation and Newton's method will generate a huge estimate of implied volatility, because the premium is insensitive in that range. And if the volatility estimate was too big, the new estimate will be a big negative number, but negative volatilities do not exist.

In fact, both corner cases tend to happen in a chain: since I began with an initial estimate of 0, it threw the second estimate into the stratosphere, and then the third estimate would be negative. And then my routine reported "Impossible to calculate".

The funny thing is that the problems began to happen because of the market crash, with options' values getting so high that implied volatilities approached 100%/year. When the markets were calm and volatilities were around 35%, the original routine performed well... On top of that, the original routine had the same problem as the HP-12C emulator: not calibrating epsilon proportionally to the magnitude of the input values.

Rereading that chapter of "Black-Scholes and Beyond", I discovered another way of estimating volatility, the bisection method, which goes something like this:

```
set estimate A to 0
set estimate B to 200%
do while abs(A-B) > epsilon_tolerance:
    C = average of A and B
    P = hypothetical premium of the option based on volatility C
    if P is less than the actual option premium:
        make A equal to C
    else:
        make B equal to C
```
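Sketched in JavaScript, with my own minimal Black-Scholes helpers (hypothetical names, not the calculators' actual code):

```javascript
// Standard normal CDF (Abramowitz-Stegun) and Black-Scholes call premium
function normCdf(x) {
    const t = 1 / (1 + 0.2316419 * Math.abs(x));
    const d = 0.3989422804 * Math.exp(-x * x / 2);
    const p = d * t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 +
              t * (-1.821255978 + t * 1.330274429))));
    return x > 0 ? 1 - p : p;
}
function bsCall(s, k, r, t, sigma) {
    const d1 = (Math.log(s / k) + (r + sigma * sigma / 2) * t) /
               (sigma * Math.sqrt(t));
    const d2 = d1 - sigma * Math.sqrt(t);
    return s * normCdf(d1) - k * Math.exp(-r * t) * normCdf(d2);
}

// Bisection: close one fence at a time until the bull is cornered
function impliedVolBisection(premium, s, k, r, t) {
    let a = 1e-9;      // estimate A: (almost) 0, avoiding division by zero
    let b = 2.0;       // estimate B: 200%/year
    while (b - a > 1e-8) {
        const c = (a + b) / 2;             // average of A and B
        const p = bsCall(s, k, r, t, c);   // hypothetical premium at C
        if (p < premium) {
            a = c;     // solution is above C: raise the lower fence
        } else {
            b = c;     // solution is at or below C: lower the upper fence
        }
    }
    return (a + b) / 2;
}
```

Note that the number of iterations depends only on the interval width and the tolerance, never on how "nice" the target volatility is.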

This algorithm has the big advantage of not being affected by the sigmoid curve; it only requires the function to be monotonically increasing, which the premium as a function of volatility actually is. It works like corralling a wild bull: you close a fence here, another there, until the bull is confined to the desired space.

But this algorithm also has two disadvantages, one big and one small. The big one is not being able to resolve volatilities above the upper estimate (B). In the example above, all volatilities above 200% would simply be reported as equal to 200%, which is obviously bad. The small disadvantage is being slower than Newton's method, and demanding the same number of loop iterations to solve any volatility, even within the range of "well-behaved" ones.

The solution I adopted for my calculators is a hybrid of the two approaches. I kept Newton's method, but added some safeguards. The most important: every new volatility estimate cannot be more than 100% away from the old estimate. This limitation guarantees that any "wild" estimate is shaved off, while well-behaved cases still keep the typical high speed of Newton's method.

Of course, that limits the maximum volatility my algorithm can handle. With a maximum of 100 rounds and a maximum offset of 100%, the maximum volatility is 10000%/year, which is well beyond the range of any market I've heard of.
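The hybrid might look like the sketch below (my own hypothetical code, with my own minimal Black-Scholes helpers; the halving of negative estimates is my assumption, not necessarily what the calculators do):

```javascript
// Standard normal CDF, call premium and vega, as minimal helpers
function normCdf(x) {
    const t = 1 / (1 + 0.2316419 * Math.abs(x));
    const d = 0.3989422804 * Math.exp(-x * x / 2);
    const p = d * t * (0.319381530 + t * (-0.356563782 + t * (1.781477937 +
              t * (-1.821255978 + t * 1.330274429))));
    return x > 0 ? 1 - p : p;
}
function bsCall(s, k, r, t, sigma) {
    const d1 = (Math.log(s / k) + (r + sigma * sigma / 2) * t) /
               (sigma * Math.sqrt(t));
    const d2 = d1 - sigma * Math.sqrt(t);
    return s * normCdf(d1) - k * Math.exp(-r * t) * normCdf(d2);
}
function vega(s, k, r, t, sigma) {
    const d1 = (Math.log(s / k) + (r + sigma * sigma / 2) * t) /
               (sigma * Math.sqrt(t));
    return s * Math.sqrt(t) * Math.exp(-d1 * d1 / 2) / Math.sqrt(2 * Math.PI);
}

// Newton's method with safeguards: each step is clamped to at most
// 100% (1.0) away from the previous estimate, for at most 100 rounds.
function impliedVolSafe(premium, s, k, r, t) {
    let sigma = 0.3;                                  // initial estimate
    for (let round = 0; round < 100; round++) {
        const diff = bsCall(s, k, r, t, sigma) - premium;
        if (Math.abs(diff) < 1e-8) return sigma;      // converged
        let next = sigma - diff / vega(s, k, r, t, sigma);
        // shave off "wild" estimates: at most 100% from the old one
        if (next > sigma + 1.0) next = sigma + 1.0;
        if (next < sigma - 1.0) next = sigma - 1.0;
        if (next <= 0) next = sigma / 2;  // negative volatility: back off
        sigma = next;
    }
    throw new Error("Impossible to calculate");
}
```

In the well-behaved range the clamp never triggers and this is plain Newton; only the wild steps pay the price of extra rounds.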

Also, I calibrated the initial estimates to make sure that Newton's method would work well with volatilities up to 100%/year, since the markets are still too volatile...