[Image: Leibniz's binary system, 1697]

Universal Numbers (unum) in Fortran

Updating code with derived types

I spent yesterday getting back into the unum code, and I quickly realized how my previous stubbornness about using bare 64-bit integers as containers for the unums, instead of some sort of user-defined data type, was making the code way more difficult to understand. On top of that, without distinct data types, the code was a lot more prone to error, and I had made some compromises, like relying on exact floating-point equality (think "x == 1.0"), that were making me uncomfortable.

But what about the performance hit? After all, why do all this work in Fortran if all the bells and whistles will just slow it down? So, as a very basic test of how derived types might impact performance, I wrote this short program:

program oo_speed_test
  use ISO_FORTRAN_ENV
  implicit none

  type :: int_container
    integer (INT64) :: val
  end type int_container

  real (REAL64) :: t, t1
  integer (INT64) :: i
  integer (INT64) :: x, y
  type (int_container) :: a, b

  y=1
  x=0

  call cpu_time (t)
  do i = 1, 1000000000
    x = y + x
  end do
  call cpu_time (t1)
  print *, 'Elapsed time for bare integer is ', t1-t

  a%val=1
  b%val=0
  call cpu_time (t)
  do i = 1, 1000000000
    b%val = a%val + b%val
  end do
  call cpu_time (t1)
  print *, 'Elapsed time for derived integer is ', t1-t

end program oo_speed_test

Amazingly, the second loop is slightly faster than the first. The bare integer loop executes in about 1.65 s on my computer (using default compiler options), with the derived-type loop taking about 1.55 s. Perhaps the more computer-savvy among you can tell me why this could be, but for now I'm satisfied with at least some sign that using derived types won't cripple the execution speed of the unum code.

So, with this assurance, I created three types:

  1. unum_t: Contains one field, u, which is a 64-bit integer
  2. ubound_t: Contains a vector, ub, of two unum_t numbers
  3. general_interval_t: Contains two vectors, each of length two. The first, endpoints, holds the real numbers representing the end points of the interval. The second, are_closed, holds two boolean/logical values indicating whether the corresponding end point is open or closed.
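
Just to make this concrete, here's a minimal sketch of what these definitions might look like. The field names (u, ub, endpoints, are_closed) are the ones above; the REAL64 kind for the interval end points is my assumption, and the actual definitions in the code may differ slightly.

module unum_types
  use ISO_FORTRAN_ENV, only : INT64, REAL64
  implicit none

  type :: unum_t
    integer (INT64) :: u           ! the unum's bit pattern in a 64-bit container
  end type unum_t

  type :: ubound_t
    type (unum_t) :: ub(2)         ! lower and upper unums of the ubound
  end type ubound_t

  type :: general_interval_t
    real (REAL64) :: endpoints(2)  ! real end points of the interval (kind assumed)
    logical :: are_closed(2)       ! .true. where the corresponding end point is closed
  end type general_interval_t

end module unum_types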

Another great benefit of derived types is that I can overload Fortran's default operators and intrinsic functions for them. For instance, I can now say "a==b", where both a and b are unums, and I can apply the bitwise operations iand and ieor to pure unum inputs. Eventually, this will also let me use the normal operators +, -, *, /, etc. with unums, so people using this package won't have to think about bit logic or internal representations in order to use them. Ideally, this would also allow previously written code to be updated with unums pretty painlessly: just add the package, redefine certain variables to be unums, and that should be mostly it.
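
To illustrate the mechanism (this is a sketch, not the actual fUnums code), here's how such overloads can be declared for the unum_t sketched above. Note that the equality test below just compares bit patterns, which is only a placeholder; a real unum comparison has to account for the extra fields.

module unum_ops
  use unum_types   ! the sketch module from above
  implicit none

  interface operator(==)
    module procedure unum_eq
  end interface

  interface iand
    module procedure unum_iand
  end interface

contains

  elemental function unum_eq (a, b) result (res)
    type (unum_t), intent (in) :: a, b
    logical :: res
    res = a%u == b%u        ! placeholder; real unum equality is subtler
  end function unum_eq

  elemental function unum_iand (a, b) result (res)
    type (unum_t), intent (in) :: a, b
    type (unum_t) :: res
    res%u = iand (a%u, b%u) ! resolves to the intrinsic iand on integers
  end function unum_iand

end module unum_ops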

Speaking of basic math, that should be in the next round of updates, which will hopefully be coming soon.

David Perner
GitHub repository

Not quite a full working version just yet, but I do have the code I've written so far posted on GitHub.

https://github.com/CrashBurnRepeat/fUnums

As of this writing, the main capabilities are setting up the unum environment, converting to and from reals, and displaying unums in a human-readable fashion. I'm planning to get basic math implemented in the next few days. Then I should be able to do my first experiments, so I'm looking forward to that.

On a more random note, I've learned a fair amount about how Fortran deals with number types. It's been interesting, and also extremely frustrating at times, but if it weren't, then I probably wouldn't be learning anything, so there's that.

David Perner
So why implement this all in Fortran?

Long story short: speed.

Like I said in my previous description of unums, a big part of my interest in doing this implementation is to experiment with the number type myself. If the language is too slow, then the number of experiments I can run goes down and the overall usefulness of the code I write diminishes. Fortran is up there with C in terms of speed, and it's interoperable with a number of other languages such as C, Python, Julia, MATLAB, and others. In other words, as long as I implement it efficiently, it should be fast and portable, two great qualities.

There are other reasons as well. Most obviously, I'm trying to refine my Fortran skills, and so this is a great project for that. Another is that being able to parallelize calculations done with unums seems to be a big selling point for Gustafson, and Fortran has several ways to parallelize its code. Granted, Fortran isn't unique in this regard, but it is one of the languages I feel is putting the most effort into parallelized numerical calculations. Perhaps this is another "speed" reason, but even just being able to test this ability seems more natural in Fortran than other languages.
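
To give one concrete example, Fortran has had the do concurrent construct since the 2008 standard, which declares that a loop's iterations are independent, leaving the compiler free to vectorize or thread them:

program parallel_hint
  implicit none
  integer :: i
  real :: x(100000), y(100000)

  call random_number (x)

  ! iterations declared independent; the compiler may run them in parallel
  do concurrent (i = 1:100000)
    y(i) = 2.0 * x(i) + 1.0
  end do

  print *, 'First element is ', y(1)

end program parallel_hint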

I'll be very curious to see how much of a performance hit you take moving from native reals to unums, and how that varies depending on the task. I'll be thrilled if the result is that unums are "only" 5-10 times slower, which to me would be a fantastic result compared to data types that actually have dedicated hardware on the CPU to run quickly.

Guess we'll find out soon, at least for basic tasks.

David Perner
Introduction: What is a unum?

Most of us are probably familiar with the idea of binary, the strings of ones and zeros that comprise the entirety of the computer code we use on a day-to-day basis. Not only do instructions need to be expressed in binary, but of course everything else computers do needs to be expressed this way too. This includes numbers.

Now, for the integers (1, 2, 3, etc.), this is relatively simple. Instead of using base-10 like we're used to, binary works in base-2 (hence bi-nary). It works something like this, in decimal/binary pairs:

  • 0, 0
  • 1, 1
  • 2, 10
  • 3, 11
  • 4, 100
  • 5, 101
  • 6, 110
  • 7, 111

If this concept is new to you, Wikipedia has a very extensive article that can do a better job of explaining it than I can. But the point is that for any integer, we can create a binary counterpart without too much effort.
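
If you'd like to generate these pairs yourself, Fortran's B edit descriptor prints an integer in binary, so a throwaway program does the job:

program binary_pairs
  implicit none
  integer :: i

  ! print each integer next to its binary counterpart
  do i = 0, 7
    print '(I1, " in binary is ", B0)', i, i
  end do

end program binary_pairs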

What about the rest of the real numbers, i.e., everything with a decimal point? Traditionally, these are handled by what are often called floats or reals. They're a more sophisticated number format than integers, with the name "float" coming from the fact that the location of the decimal point "floats" depending on the value of the particular binary string. Again, this is all very complicated, and Wikipedia can again do a better job than I can of explaining it.

So what's the problem? We've got our integers, we've got everything that's not an integer, we're good, right? Well, not exactly. The problem is that there are uncountably many real numbers, so we can't represent them all with binary strings only 32, 64, or even 128 bits long. In the standard that dictates how floats work, when a value is created that sits between two numbers the float can actually represent, the value is rounded off (somehow) to the nearest answer. This has some surprisingly relevant results. For instance, the decimal number 0.1 has no exact representation in binary. So yes, every time a computer deals with a dime in financial software, it's actually dealing with something ever so slightly larger than a dime. Same for a penny. Granted, the amount over is minuscule, and I suspect that these amounts are tracked in terms of cents in order to avoid this rounding, but still, what happens if you want to find 10% of an amount?
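
You can see the dime problem directly by printing 0.1 with more digits than the format actually stores:

program dime_test
  use ISO_FORTRAN_ENV, only : REAL32, REAL64
  implicit none

  ! both single and double precision store something slightly above 0.1
  print '(F0.20)', 0.1_REAL32   ! prints roughly 0.10000000149011611938
  print '(F0.20)', 0.1_REAL64   ! prints roughly 0.10000000000000000555

end program dime_test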

A very interesting example to me was the following code:

program real_ulp_test
  use ISO_FORTRAN_ENV, only : REAL32
  implicit none

  real (REAL32) :: sum = 0.0
  integer :: i

  do i = 1, 1000000000
    sum = sum + 1.0_REAL32
  end do

  print *, 'Final value is ', sum

end program real_ulp_test

In a nutshell, this code is simply taking a float (in Fortran called a real), setting it to zero, and then adding a 1 to it 1 billion times. What do you think the end result will be? Obviously not 1 billion or else this would be an easy question. So how much will it be off by? 1%? 10%? After all, a 32-bit float can express numbers in the range of a billion, so this shouldn't be too bad in that regard.

Well, I ran this code to see for myself. 

The answer is 16777216. It's an error of just over 98%. 98%!

What's happening here is that 16777216 is 2^24, and past that point the gap between consecutive single-precision floats grows to 2, so the next representable number up is more than 1 away. The result of adding 1 is therefore always rounded back down to 16777216, regardless of how many times you do it. The sum just becomes stuck.
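
Fortran's SPACING intrinsic reports the gap between a value and the next representable one, so you can watch this happen:

program spacing_test
  use ISO_FORTRAN_ENV, only : REAL32
  implicit none

  ! the gap between consecutive single-precision values doubles at 2**24
  print *, spacing (16777215.0_REAL32)   ! prints 1.0: adding 1.0 still lands exactly
  print *, spacing (16777216.0_REAL32)   ! prints 2.0: adding 1.0 rounds back down

end program spacing_test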

Granted, this example is a little overblown on the percent error. If you were only counting to 16777217, there'd barely be any error at all. Still, the point is that seemingly simple operations can have error creep into them, even if there's nothing wrong in the problem statement itself.

Here is where unums come in. They, to a large extent, follow the float format. They have the ideas of "fraction" and "exponent", kind of the binary version of scientific notation. They also have a bit representing whether the number is positive or negative. Seems simple so far. But they also have three additional fields representing:

  1. The number of digits in the fraction
  2. The number of digits in the exponent
  3. Whether the value is exact or inexact

This allows unums to represent numbers that traditional floats can't, and, on top of that, to represent that a calculated number sits between the numbers they can exactly represent. This last feature in particular is new, and Gustafson (the proposer of this format) espouses it as a way to always express correct quantities, even if they're imprecise. This imprecision is both a blessing and a curse: to Gustafson it represents a way to do math that was previously unavailable to computers, but to detractors (from what I've seen) it's just another incarnation of interval arithmetic, an idea that sounds good but tends to cascade into useless answers, like that your number lies between positive and negative infinity.
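
To make that layout a little more concrete, here is a sketch of how those extra fields, which Gustafson collectively calls the "utag", might be unpacked from a 64-bit container. The field names and their ordering follow his book; the environment sizes and the example bit pattern are made up for illustration, and the actual layout in my code may differ.

program unum_fields
  use ISO_FORTRAN_ENV, only : INT64
  implicit none

  ! example environment: a 3-bit exponent-size field and a 4-bit fraction-size field
  integer, parameter :: esizesize = 3, fsizesize = 4
  integer (INT64) :: u, fsize, esize, ubit

  u = 181_INT64   ! an arbitrary example bit pattern (binary 10110101)

  ! the rightmost bits form the utag: fsize-1, then esize-1, then the ubit
  fsize = iand (u, 2_INT64**fsizesize - 1) + 1
  esize = iand (ishft (u, -fsizesize), 2_INT64**esizesize - 1) + 1
  ubit  = iand (ishft (u, -(fsizesize + esizesize)), 1_INT64)

  print *, 'fraction digits:', fsize, ' exponent digits:', esize, ' inexact bit:', ubit

end program unum_fields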

Is this a revolution in numerical analysis? Or is it an old idea in new clothing? Hopefully by the end of this project I'll have a more balanced view of the claims both sides are making. And if it really is revolutionary, all the better.

David Perner